A graphical pseudo-3D environment in an Android app

Rebecca Nesson posted on Tuesday, March 13th, 2012 | Android, iPhone, Mobile | 2 Comments

At PRX we’re lucky to get to work with creative, fun-loving clients who want their apps to be more interesting to play with than the average app made from standard iOS components or Android widgets. In one app we’re currently developing, we’re creating an engaging and fun pop-up-book-style environment in which the user encounters the program content as she navigates through an imaginary world. It’s beautiful and fun and a real programming challenge. On the iOS side, Devin created the 3D-ish environment using native iOS layers positioned in 3D space. It’s my job to create the same effect in the Android version of the app. Native Android views don’t support this kind of positioning in z space, and there isn’t a built-in “camera” that can be transformed to give the illusion of depth. OpenGL could provide the 3D environment, but it would be a steep learning curve for me, and it would make it harder to use the usual Android widgets and activities for performing the basic functions of the app, like presenting lists of content and playing audio. Enter AndEngine.

AndEngine is a free 2D game engine for Android. It allows the creation of a game activity that I can combine with other Android activities to present content. (I use Android Fragments, via the Android Support V4 library, to incorporate traditional Android views into the game environment.) Although AndEngine is intended for 2D games, a forum thread demonstrated how to apply the same perspective trick to the camera that we’re using on the iOS side:

private void setFrustum(GL10 pGL)
{
    // set field of view to 60 degrees
    float fov_degrees = 60;
    float fov_radians = fov_degrees / 180 * (float)Math.PI;

    // set aspect ratio and distance of the screen
    float aspect = this.getWidth() / this.getHeight();
    float camZ = this.getHeight()/2 / (float)Math.tan(fov_radians/2);

    // set projection
    GLHelper.setProjectionIdentityMatrix(pGL);
    GLU.gluPerspective(pGL, fov_degrees, aspect, camZ/10, camZ*10);

    // set view
    GLU.gluLookAt(pGL, 0, 0, camZ, 0, 0, 0, 0, 1, 0); // move camera back
    pGL.glScalef(1,-1,1); // reverse y-axis
    pGL.glTranslatef(-CAMERA_WIDTH/2,-CAMERA_HEIGHT/2,0); // origin at top left
}

What’s happening here is that the camera is pulled back away from the scene and a perspective transform is applied so that objects farther from the camera appear smaller. I can’t explain it any better than the cryptic m34 transform that is applied to the camera on the iOS side, but the effect is the same.

The only other modification I had to make to AndEngine was to create a 3D sprite class that wraps the provided Sprite class and allows the user to set the z position of sprites as well as their x,y position. In our app world the user doesn’t interact directly with the scene but rather with a scrolling mechanism that moves the scene “on rails” as the user scrolls. The effect is beautiful but also somewhat hard to capture in screenshots. You’ll just have to buy the app when it comes out!

The good news is, the app is shaping up beautifully and AndEngine has really come through for what we needed to do. But there’s a big remaining issue that I’d like to solve. AndEngine takes care of all of the touches on the scene and passes them to the sprites. But it does so based on their x,y coordinates. Unfortunately, the x,y coordinates it calculates from touches on the screen do not correspond to the location of the sprites within the scene, because of the perspective transformation based on depth. Under the covers OpenGL knows where the sprites are, because it drew them correctly on the screen, but AndEngine itself does not. Additionally, I can only get access to a GL10 instance, which does not provide the functions I need to project and unproject coordinates. For now I’m working around this issue, but in principle I should be able to do the math to convert 2D screen coordinates into 3D scene coordinates using the ratio of the scene size to the screen size, the position of the camera, the angle of view, and the distance of the object in question from the camera. So far I haven’t succeeded in doing it, but when I get a few days to step back from the project I’ll turn to it again. If you think you know how it should be done, please comment!
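For what it’s worth, here is the conversion I expect should work given the setFrustum code above; I haven’t verified it in the app yet, so treat it as a sketch of the math rather than a solution. The arithmetic is language-agnostic (shown here in Ruby for brevity): sprite_z is the sprite’s depth, with 0 meaning the z=0 plane the 2D scene normally lives on and positive values meaning closer to the camera.

def screen_to_scene(touch_x, touch_y, sprite_z, camera_width, camera_height, fov_degrees = 60)
  # Recompute the camera distance exactly as setFrustum does.
  fov_radians = fov_degrees * Math::PI / 180
  cam_z = (camera_height / 2.0) / Math.tan(fov_radians / 2)

  # Points on the z = 0 plane project 1:1, so a sprite at depth sprite_z is
  # scaled by cam_z / (cam_z - sprite_z) around the screen center. Undo that
  # scaling to map the touch back onto the sprite's own plane.
  scale = (cam_z - sprite_z) / cam_z
  cx = camera_width / 2.0
  cy = camera_height / 2.0

  [cx + (touch_x - cx) * scale, cy + (touch_y - cy) * scale]
end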

Pledge your support for mobile

Matt MacDonald posted on Wednesday, February 29th, 2012 | Mobile | 1 Comment

Hi,

I wanted to make a quick plug for a session that I’m organizing at the 2012 Integrated Media Association conference next week. The session is titled “Pledge your support for mobile,” and it will be both an overview of current pledge/donation practices and a hands-on working session. We’ll review existing pledge and donation practices, what works and where people get hung up, and then both the session panelists and attendees will attempt to streamline and significantly improve the pledge process for public radio and television stations.

The major themes for the IMA conference this year are:

  1. Motivate attendees to take individual ownership for innovation
  2. Provide the attendees with knowledge that they can take home and use immediately

I’m really excited about our session panelists and facilitators; I’ve worked with each of them in the past on mobile and tablet projects: Devin Chalmers (KCRW Music Mine iPad, Radiolab iPhone), Dan Saltzman from Effective UI (KCRW Music Mine iPad), and Marine Boudeau from New York Public Radio (WNYC, WQXR, Radiolab iPhone and Android). Each has worked outside of public media and has extensive experience in user-centered design and interfaces.

I’m also excited to try out a hybrid session with both a conference panel and a hands-on hackathon approach. My hope is that we all learn some new things, create something new, and leave with actionable steps. This will be a session where attendees are encouraged to participate, one where we use our collective strengths to improve a process that is ripe for revamp. If you are attending IMA, I hope you can join us.

Our session is on Thursday March 8th starting at 2pm.

I’ve uploaded my in-progress slides in an attempt to entice people to come check out our session.

Benjamen Walker Is Listening

Andrew Kuklewicz posted on Friday, January 13th, 2012 | Uncategorized | No Comments

It’s his birthday, go leave him a message.

If you are too lazy for that link, just call: (877) 815-3013

How does it work? Here is the secret formula:

@benjamenwalker + 40 * ( twilio + heroku + three.js + jplayer) = happy birthday

I worked on this over the course of a day and a half; the tools are out there, people, to make incredible stuff, and fast, even if you are an aging coder like me. Go make something!

(I also got some design inspiration from @loveandradio, but I lined everything up on the other side of the screen, so it is completely different. Also, I’ve only seen Nick’s head spin around like that when I got a concussion.)

Facebook Open Graph and Public Radio

Andrew Kuklewicz posted on Saturday, September 24th, 2011 | Big Data, Social Media | No Comments

Watch what you’re doing

Love it or mock it, Facebook’s on to something with the latest beta features for Open Graph. I’m not saying they are the first to realize there is more to the world than liking; scrobbling on Last.fm has been going on for a while, and we even worked with Doc Searls to build ListenLog, which has the added (and hopefully inevitable) twist that you own your own data about what you hear/read/like/hate. Imagine that: not just giving that kind of useful, private information away to some company. But I digress.

Many have seen the writing on the wall that personal analytics (my latest favorite is RunKeeper) is a natural outcome (or accomplice) of the big data and social networking trends, but Facebook is in a position to take it to 11. Not to mention they have real motivation, with the competition at Google+ looking for a differentiator or twenty. I embrace personal analytics as providing a new perspective on one’s self, and a way to know each other better.

In public media, during pledge drives, stations often ask listeners to imagine how many hours they listen in a year, and what value that has to them. Why imagine or guess at that when we can actually answer the question with data?

“With ListenLog data, listeners can make fully informed choices about how they support streams, podcasts and on-demand sources of programming.”

…and that is every bit as true for what Facebook wants to start logging. I don’t know whose version of media logging will ‘win’, and as we’ve learned from Friendster and MySpace, winning is not a permanent condition, but the concept is powerful, and the potential effect on our behavior profound. Or at least, I hope it is. I also hope to find a way to have my cake and eat it too: own my data, but share intentionally. Facebook may not be there yet (an extreme understatement), but it can’t be ignored in the meantime.

And now what?

For listeners to public radio, what we listen to is as much a part of our lives as the TV and movies we watch, the music we hear, and the books we read. No one is forcing anyone to log this stuff, but if someone should want to, be it for their own curiosity, for what it says about them, or just for amusement, we should make sharing it easy, and as complete as it is for other commercial media. In fact, we should do better: folks should be proud to share public media, and lead others to that experience as well.

While I have focused so far on media logging, we have another action besides ‘listen’ that we want to encourage and shout from the pixelated rooftops: ‘support’. I want to see that early and often, whether it is supporting a broadcast, stream, program, episode, network, host or even a program that hasn’t been made yet.

Brass tacks

I can’t write a labs blog post without getting to something more technical, or they’ll take away my public keys.

Here are some ideas for what I think comes next, and I hope others in public media are thinking the same way and will act on it. Hopefully we can even agree and work together (it has been known to happen):

  1. There are movie, video and TV action and object types already defined. I think these should be adopted by public/community TV, with perhaps some embroidery over time. As they say, “adopt and extend,” not analysis paralysis, seems like the way to go.
  2. Audio lacks a standard definition of simple actions (e.g. ‘listen’), though for video/TV there is a built-in ‘watch’ action that is a good basis. We also need to define action types for public media (e.g. ‘support’ leaps to mind).
  3. We also need to define audio media objects for radio programs, episodes, stories, segments, streams, and podcasts. Others have probably started to do this in their Facebook apps, but wouldn’t it be nice if, at least in public media, we could agree? Audio is where we need to do the work; news stories (e.g. Articles) and video/TV are pretty well covered by the built-in object types.
  4. With these standard objects and actions in place, we need to make them easy to log. That means adding logging to however users listen (see the sketch after this list). For PRX, that means integrating optional Facebook Open Graph listen logging into our station and program mobile apps, the Public Radio Player, and websites like prx.org.
  5. We also need to make it easier for station websites to log this listening activity, just as they handle Facebook and other kinds of sharing now. Integrating listen logging into the media players via open-source Drupal and WordPress plugins seems like the easiest way to do that, since between those two you see a majority of new work, especially with the Drupal work by NPR on Core Publisher, PBS on Bento, APM using Drupal for the new Marketplace site, and PRI shows like To The Best Of Our Knowledge developing their new site on Drupal with a talented team at Wisconsin Public Radio. And that’s just to name a few. Figure out how to integrate with these Drupal distros, and you get a big bang for your buck.
  6. IMA launched Public Media Metrics to combine traffic data from all public media websites. This worked because we have all learned to share, and when we all use Google Analytics, it is comparing apples to apples. If we can standardize on the actions and objects used in Facebook’s Open Graph, not only is it less confusing, but there is a similar opportunity for aggregating social media metrics.
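To make item 4 a bit more concrete, here is a rough sketch of what logging a ‘listen’ action against the Graph API could look like. This is not working PRX code: the ‘prx_audio:listen’ action, the ‘episode’ object type, and the piece URL are all hypothetical, and any real action and object types would first have to be defined in the Facebook app settings.

require 'net/http'
require 'uri'

# Hypothetical sketch: publish a custom Open Graph "listen" action for a user.
access_token = ENV["FACEBOOK_ACCESS_TOKEN"] # the listener's user access token
uri = URI.parse("https://graph.facebook.com/me/prx_audio:listen")

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

request = Net::HTTP::Post.new(uri.request_uri)
request.set_form_data(
  "access_token" => access_token,
  "episode"      => "http://www.prx.org/pieces/12345" # a page carrying the og: object markup
)

response = http.request(request)
puts response.body # the id of the new action, or an error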

Some commercial and start-up media companies already have a jump on us in integrating these Facebook beta features, and that’s fine, but let’s not wait too long to catch up and exceed them. Our capacity for cooperation also means we have a unique chance to combine our efforts across public media for design, implementation and measurement. We have millions of listeners and viewers; we should let them prove it.

Exploring The Depths Of KCRW Music Mine: Part One

Daniel Gross posted on Monday, September 19th, 2011 | iPad, KCRW, Mobile | No Comments

KCRW Music Mine launched recently to rave reviews. The free iPad app is the result of a collaboration between KCRW, RoundArch, and PRX, which led the project and development efforts. Music Mine pushes the limits of the tablet platform to create a truly unique music discovery experience. Daniel Gross interviews Matt MacDonald, PRX’s Director of Project Management, about the app.

What can Music Mine do?

The app is an iPad music player that opens up to a grid of 100 songs from artists that played on KCRW shows in the last week. What the app does is highlight tracks that have been hand-picked by a KCRW DJ who wanted you to hear them. You might find odd pairings, maybe TV on the Radio right next to the newest Jeff Bridges country song. What they have in common is that these are all great songs picked by humans with taste and a point of view, not selected by an algorithm. The app lets you explore each of these artists in greater depth, and we use software to add supporting songs, videos, photos, bios and blog posts.

How did it all get started?

We started talking with KCRW in the summer of 2010 about building an iPad app. PRX has been doing iOS development since late 2008 with apps including the Public Radio Player for iPhone and This American Life app for iPhone, iPad and Android. So we knew we could make this project ambitious.

As a public radio station, KCRW produces a ton of content. But we wanted to build an app tightly focused on their music. Anil Dewan at KCRW helped us understand their online music identity. But given the existing competition facing KCRW and listening platforms, we were asking: how the heck do we stand out and differentiate ourselves?

And? What makes Music Mine so different from other apps?

The biggest difference is Music Mine’s human touch. Other music apps like Pandora, Discover, Spotify or Last.fm are designed to provide access to every song and every artist in their huge catalogs. When you’re listening to Wilco, say, those apps use an algorithm to automatically recommend the next 5 or 6 ‘related’ artists. KCRW has a huge and diverse catalog but we wanted to amplify and highlight the work that the DJs do by narrowing what you should be listening to.

The user interactions in the app have an attitude and a point of view, like the KCRW DJs. One goal for us was to encourage musical exploration and delight, so some of our interaction decisions basically force you to try out music that you might not be familiar with. We made sure to encourage that by not adding features like search or sorting and filtering tools. We also could have directly presented the underlying ordered list that backs the grid layout, but we felt that doing so would have made you less likely to try out new songs, and that you might only search for and tap those you know.

So you want an app with people behind it – but you still want to take advantage of technology, right? What do you need to do that?

Yep, we definitely want to use technology to help us out here. Our office is in Harvard Square, and the local tech scene here is pretty amazing. One of the companies we’d wanted to work with for a long time was The Echo Nest over in Somerville. When we started dreaming up the Music Mine idea we went straight to them.

The Echo Nest provides APIs that allow us to request a ton of information about artists and songs, found audio, video, relevant blog posts, bios and photos. We knew that merging the KCRW playlists with Echo Nest data sources would give us the ability to create a unique window into the KCRW music universe. Rebecca Nesson here at PRX led the effort to merge the data sources from KCRW and Echo Nest. While each of the songs available in the grid is initially picked by a human, PRX wrote software to help narrow down the thousands of tracks that could be available to the 100 available on the grid.

How did the project evolve as you built and rebuilt the product?

A big question early in the project was: should the primary experience for how people listen in the app be focused on the long, multi-hour DJ sets, or center on playback of individual tracks? PRX and RoundArch interviewed people specifically about what they might want in a music discovery app from KCRW, and after hearing from them we decided to focus the experience on track-by-track selection. That decision proved challenging, and mid-way through the project there was a point where we really had to consider ditching the track-by-track access and moving back to a DJ-set-focused app.

The sub-views in the app like the artist photo, song, video, bio, blog pages came together pretty quickly and didn’t change much over the project but the home view, the grid, was definitely an area where we iterated on ideas for a long time. Yeah — that took a really long time. We might have had 8 or 10 significant tweaks to the layout and presentation. But I think all the time we spent iterating on the ideas really paid off in the execution.

Alternate DJ set focused home view

Early DJ focused home view

Initial grid comp

Final grid design.

How do you feel about it? Is there anything you’d improve or add?

Well…no. Actually, I’m really happy with it. I just re-read the original product vision and I think we’ve really held true to that over the last ten or so months.

The KCRW iPad Experience project will focus on creating an iPad experience that quickly engages target users by leveraging KCRW’s unique style, taste and hand picked music. In addition, tight conceptual focus and appealing interaction models will aid in driving long-term user engagement.

People that we interviewed about what they might want in a music discovery app said they wanted us to make good music important to them, help them find new good music and improve their awareness. We’ve done that. People wanted us to connect them with great new artists, and help them learn more about music and artists they’re already familiar with. I think we’ve done that too.

What’s next from the PRX team?

I’m really excited about an iPhone project that we’re working on with a very popular public radio show. I feel like we’re developing a unique way to help people interact and connect with the show. We’re pushing the device’s capabilities, coming up with new interactions that can surprise and delight people.

Any last thoughts?

People talk about software as engineering, and it’s partially that, but it’s also a craft. This kind of work is like building a beautiful piece of furniture. When it’s done poorly, you end up with a cheap nothing that you throw away. But when it’s done well, you’ll never see all the work that went into it. Sometimes that means a lack of appreciation for the thought and care that was put in.

Devin Chalmers, Rebecca Nesson and Andrew Kuklewicz are top-notch developers, maybe the best people I’ve ever worked with. But even with the best people working together, it requires time and freedom to explore and do it right. We could have finished a long time ago if we’d done it poorly. Instead, we came up with something that we’re proud of, KCRW is happy with, and it seems so are the people using it.

Part Two

Coming up: Rebecca Nesson and Devin Chalmers will be digging into more technical aspects of the KCRW Music Mine app: perspective transformations on scroll views, knob-twiddly stuff to tweak behavior, and how we figured out layouts of a 4×6 grid in both portrait and landscape modes.

Our Experience with Chef

Chris Rhoden posted on Tuesday, July 5th, 2011 | Ruby | No Comments

Part of the rollout for some automation features we have been working on is a requirement that we have a very scalable set of FTP servers. We’ve been using Amazon Web Services for many things at PRX, and the ability to spin up very cheap instances on both sides of the country is a huge win for performance and reliability. Unfortunately, there’s a fair amount of work that needs to be done when a new server is spun up. Because the specifics of this work change with some frequency, and because we have a number of different kinds of servers we need to be able to spin up (and more are coming), the standard use of AMIs would be a nightmare to maintain.

Enter Chef.

Chef allows us to develop (in source control) a system which describes both a particular configuration we would like our servers to have and how to achieve this configuration. The configuration files are implemented in Ruby, so the learning curve is primarily around nomenclature (and there’s a whole lot of it to learn). That said, most of the new concepts introduced by Chef make sense once the system at large “clicks.” I don’t expect a blog post to explain all of the nuances of an infrastructure configuration framework, but I think I can give enough of an introduction that one can walk away confident in their ability to do some simple deployments.

Clients and Servers

An Example Chef Deployment

The first thing one needs to learn about Chef is the notion of a server and a client. A chef server holds information about the chef clients (this is not strictly true – chef makes a distinction between clients and nodes; the former being something which interacts with the chef server and the latter being something which is assigned a set of code to execute. In nearly every case, however, the two are interchangeable. Just remember that when you assign something a role or recipe, you are assigning a node, and when you download updated configuration details from the chef server, you are using a client) in your deployment environment and what configuration they should have in order to be considered set up. It also stores the code you have written so that clients can achieve the expected configuration, but it never executes it. A chef server can be thought of, in simple terms, as a database.

One can install the server themselves, or can opt to use the Opscode platform, which is free for smaller deployments.

Clients pull down information about how they are expected to configure themselves from the chef server, and then execute the code you have written to get to that point. They do this by manually or periodically running chef-client, which reads from the chef server what code it will execute, then downloads and executes it.

Cookbooks and Recipes

The ruby code stored on a chef server is grouped into files called recipes, and those recipes are grouped into directories called cookbooks. Usually, cookbooks are collections of recipes that relate to a specific software package. For instance, one might write a cookbook for the Apache web server that included a recipe for the web server itself, a recipe for mod_ssl, and a recipe for mod_dav.
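To make that concrete, here is a minimal sketch of what the default recipe in such an Apache cookbook might look like. The package and service resources are standard Chef DSL; the "apache2" package and service names assume a Debian-style system and are just for illustration.

# cookbooks/apache/recipes/default.rb -- install and run the web server itself
package "apache2" do
  action :install
end

service "apache2" do
  supports :restart => true, :reload => true
  action [:enable, :start]
end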

Many cookbooks (and, by extension, many more recipes) are available on the Opscode community website. If you see one you would like to install, you can use the knife cookbook site install <cookbook name> command. Recently, Opscode did a major overhaul and made sure that many of these cookbooks are in working order. Please be aware, however, that many of them were written months or years ago, and may require some tweaking, especially if the software package they work with changes often.

Run Lists and Roles

Another construct to be aware of is the run list. These are usually recipes, listed in order, that should be executed by the client every time it pulls information down from the chef server. Chef also supports roles, which are run lists in their own right, but can be referenced in other roles and run lists. In our current deployment system, we have a few roles which each contain several recipes, and each Chef client is configured to have only one or two roles in its run list.
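For illustration, a role is defined in Ruby as well; the role and recipe names here are hypothetical rather than our actual production roles:

# roles/ftp_server.rb
name "ftp_server"
description "Everything one of our FTP nodes needs"
run_list(
  "recipe[users]",
  "recipe[vsftpd]"
)

A client whose run list includes role[ftp_server] will then execute those recipes, in order, on its next chef-client run.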

Starting a Chef-Managed Instance

The typical workflow for starting a new instance is to spin one up, install chef, register it as a client with your chef server, add things to the client’s run list on the chef server, and then run chef-client to pull down the run list and execute it. Quite a mouthful!

Thankfully, in practice, this can easily be rolled up into one step. Because we are making use of Amazon EC2 at PRX, we installed the knife-ec2 gem and we can run knife ec2 server create -r "run list". This spins up an ec2 instance, bootstraps chef, registers it as a client with the chef server, and sets the run list appropriately. It also runs chef-client automatically when everything is set up, so one should have a new EC2 instance with all of the software one wants installed and configured.

Wrapping Up

We’ve covered the actual steps involved in using a configured chef server to deploy new EC2 instances, but we haven’t touched on how to actually configure the server. For that, I will refer you to the fantastic Chef Repo article on the Opscode Wiki, which will explain the directory structure and basic workings of the repository you will use to interact with the chef server.

I found much of the language around the Chef framework confusing when I was learning it, and I hope that this article serves to make the fundamental concepts a little easier to understand.

Building a Better Contact Sheet

Chris Kalafarski posted on Monday, April 25th, 2011 | iPhone, Mobile | No Comments

Last week we decided to update the Photoshop file we provide to radio stations to allow them to customize the look of our core station iPhone app. There were two main problems with the version we had been using, the first being simply the organization of the layers. There was no standardized mapping of layers in the file to the individual files we need to use in the final app. The other problem was that our process for breaking the file down into its component parts was extremely time consuming. We were using an AppleScript that had been built in-house which extracted specific layers from the document, one-by-one. Additionally, we were maintaining low-res and high-res versions of the template for retina and standard displays.

The solution we decided to implement isn’t perfect, but it dramatically speeds up the processing step on our end, allows for a much more sensible organization of the component graphics within the document, and provides a buffer between what the station designer is doing on their end and what we’re doing on our end.

Buttons in the SpriteSheet

In order to ensure proper and meaningful organization of the template throughout the workflow, each graphical element we require from the stations is now represented as its own Smart Object. Each button, icon, background, etc. is a distinct smart object. This way we can be sure that all changes being made to a graphic are in the appropriate and anticipated place. If our default icon is a single raster layer but the station has a complex multi-layer vector replacement, there is no issue. Unlike the old version, where new layers essentially went unaccounted for unless we were told explicitly that they had been added, now all the changes being made to each graphic are entirely encapsulated in a single object. This also allows us to set the canvas size on a per-graphic basis, so a designer doesn’t accidentally build a graphic that is too large for the space it will have in the final iPhone app.

Beyond that, the file is simply organized in a way such that relevant smart objects are in layer groups, so it’s easy for the designer to find what she is looking for. Icons are all in one place, backgrounds are in another, and status indicators another. All of the default objects are positioned roughly as they would appear in the final app inside a virtual iPhone, ensuring the designer is building the graphics in the correct context.

By using smart objects, we are also able to rethink the way the file is processed once it’s returned to PRX. Rather than run it through a script that is heavily dependent on specific layer names, we are now taking advantage of the dynamic Slice tool that Photoshop offers. You may be familiar with using Slice in Photoshop or Fireworks from when people were building websites with tables. It allows you to define contiguous regions of a file which can automatically be batch output, and each slice can be given a unique name that persists inside the Photoshop file itself. The tool also allows slices to be dynamically linked to specific layers in the file, resizing to fit the contents of the layer. In our case, each smart object is a layer that has been linked to a slice, so if a station designer should choose to replace a default with an image that is larger or smaller, the slice we end up outputting is adjusted automatically.

Unfortunately, the slice tool is a relatively simplistic tool, and is only intended to work on the merged contents of the defined area. This is problematic, for instance, with toolbars, where we allow the designer to change both the background and the buttons. In our template we keep those graphics stacked on top of each other, just as they would appear in the app. The slice tool, though, would not handle that situation properly, and would export a single image rather than the separate parts.

Because we’re using smart objects, though, it’s trivial to duplicate each smart object somewhere else in the file where they can be arranged to prevent overlap. Duplicates of smart objects all point to the same source, so when the designer makes their changes in the part of the file where the layers are arranged to mimic the iPhone, the exploded view of the graphics we maintain behind the scenes immediately reflects the changes.

Once we get the file back, we simply hide the designer-facing layers and bring up the matching layers we need for the batch export. A trip to the Save for Web dialog makes the export process take all of 10 seconds, orders of magnitude faster than the old AppleScript. Since the names of the files being generated are pulled from the slice metadata, we ensure all of the station’s images will end up in the appropriate files we need to produce the iPhone app.

There are two caveats with this process. The first is that even though the canvas size necessary for the user-facing layers is relatively small (technically just the resolution of a retina display), the actual file ends up being much larger. Because we need an unobstructed view of every single element, even when they are placed as closely together as possible the canvas ends up being about 4000×1500. This is mostly empty space, and not really a major issue, but it is not ideal.

Because we will need to extract images through cropping, the canvas is much larger than the working area.

 

The other problem is something Photoshop is actually doing to give the user more control. Because smart objects can contain vector graphics, Photoshop allows them to be positioned at a sub-pixel level, even in the raster-based world of the actual Photoshop file. It does this even when the smart object contains only raster items itself. An unfortunate side effect of this is that unpredictable rendering can occur. Sometimes when the canvas height or width of a smart object is an odd number, Photoshop will try to center it inside the parent document, which places it halfway between real pixels. When that happens edge pixels of the smart object’s contents are improperly rendered (one edge is truncated a row early, and the edge pixels of the opposite edge are repeated). It’s not a hard problem to correct, but some care must be given since the entire document is based on smart objects.

The new Chris (aka Farski)

Chris Kalafarski posted on Thursday, April 7th, 2011 | Introductions | No Comments

(this is a crosspost from the fabulous prx blog)

I’ve landed here at PRX just a month after discovering the company. I was immediately intrigued by the work they are doing with lots of the public radio content I have grown to enjoy and trust, so when I saw their posting for a Rails developer I had to jump on the opportunity. My recent work history is filled with Rails applications focused on presenting data to users in the most effective ways possible. I’ve reached where I am through some print and web design, film and photo production, and even some financial services. Staying on top of the latest ideas and trends in technology, and thinking of ways to use them to help improve the world around us, is something I’m passionate about.

PRX has brought me back to Boston, where I graduated from the business school at Boston College in 2004. In my time there I worked hard to understand where uses of technology in business and media worked, and where they were lacking or not being used to their full potential. It’s amazing to be part of a company like PRX, which is at the forefront of enacting change in the storied world of public radio, bringing in a fresh generation of listeners and redefining the way radio content is delivered. I hope that with my background in user experience design and development I can help PRX continue to excel at reshaping public radio.

When I’m away from the computer, I pass the time playing a wide variety of board games, being a freelance sports photographer, staying pretty active, or hanging out with my dog. I’m always up for a good movie or documentary, and finding a new best-band-ever happens more frequently than it probably should.

Building an iOS Application Platform

Rebecca Nesson posted on Monday, March 21st, 2011 | Git, iPhone, Mobile | No Comments

In the last several months PRX introduced a new product to stations: an iPhone public radio station app platform. In the upcoming months the first several apps built on this platform will begin to appear in the iTunes App Store. The goal with these apps is to provide a highly customizable application that allows stations to showcase their content and their brand. The challenge for the stations is that they are on constrained budgets and cannot pay the large development costs usually associated with a custom-built iPhone app. The challenge for us is to provide them with a satisfying alternative that has a full feature set, a custom feel, and a price tag they can afford. To meet this challenge we’ve developed a platform that abstracts as much data as possible out of the application and streamlines the maintenance and updates of code shared between our applications. Here’s a laundry list of the techniques we’re using to accomplish this.

The Server Backend

We use a Rails application as a backend server to maintain the data about the stations. This includes information about their programs and their schedules, as well as other app-related data such as the content of various HTML-based views that appear in the app. We also use this backend to integrate with third-party providers of data for the app, such as Publink.com, which provides access to some stations’ member or listener discount systems. Using a backend server application makes it easy to update data in the app without a new application release to the iTunes store, offers the potential of letting stations maintain their own data, and helps us standardize the models and formats of the data on which the iPhone code relies.
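As a rough illustration (this is not our actual code, and the model and attribute names are hypothetical), the backend boils down to JSON endpoints like this one, which the app requests whenever it needs fresh station data:

class StationsController < ApplicationController
  # GET /stations/:id.json -- everything the app needs to render a station
  def show
    station = Station.find(params[:id])
    render :json => {
      :name     => station.name,
      :programs => station.programs.map { |program|
        { :title => program.title, :schedule => program.schedule_entries }
      }
    }
  end
end

Because the app reads this data at runtime, schedule and content changes never require pushing a new binary through the App Store.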

App Customization using Inheritance

After removing as much of the data (and the variations in the data formats) from the phone as possible, there still remains the problem of how to provide a core set of features and native views for the iPhone app while also giving stations a lot of leeway to customize the way their app looks and functions, and of how to maintain the ability to provide improvements to features without breaking or overwriting customizations. We’re using inheritance to solve this problem. Each model, view, and controller in our app has both a “Base” version and a “Custom” version, where the Base version is the parent class of the Custom version. We develop all features in the Base versions and expect clients not to make customizations or other changes within these files. That way, when we update a Base version of a class we can push the change into all of the apps without fear of overwriting a customization. Throughout the code outside of the class we refer only to the Custom version of a class, so that any customizations made in the Custom version will be used instead of the code in the Base. This allows a station to make small changes to a particular area of their app or even to completely redo the way a particular area of the application works.

Managing Improvements Using git branches

One of the trickiest parts of keeping the platform development manageable was figuring out how to maintain the code bases for each app in a way that allows easy integration of improvements over time. We’re using git (and GitHub) for this purpose. We maintain a master version of the code as a main branch. It includes all of the features and the Custom classes but no customizations. Each station app has its own branch based on the master branch but including all of the customizations. When I’m developing, I always work in a specific station’s branch. When I make changes to Custom classes or non-shared resources I commit them in the station branch. When I make a change to Base classes or other shared code or resources, I commit in the master branch and merge back into the branch I’m working on. When it’s time to go to work on another station’s app, I check out the branch and the first thing I do is merge with the master branch to pull in the latest Base changes.

A cool side effect of this process is that GitHub maintains a helpful view of how up-to-date a given app is with the changes to Base, as well as how customized a given app is. Here’s a snapshot of that view right now:

Customized Graphics with Photoshop Contact Sheets

It was important to us and to our clients that they be able to work with their own designers to create their own look for their apps. In order to do this we abstracted out as many common and reusable user interface elements as we could and created a Photoshop “contact sheet” for the app. This contact sheet provides templates of the graphics that are used throughout the app, separated into a single layer per item. We provide this contact sheet to our clients’ designers and they replace the default graphics with their own designs. This allows stations to use the defaults where they like but also to come up with their own design and look for the app. We then created an AppleScript that exports all of the images and saves them with the file names the app expects. This keeps the designer’s role strictly to design and limits the amount of time we as developers have to spend incorporating the graphics into the app.

Using .strings Files for Text Customization

One other facet of customization worth mentioning, although we still haven’t quite worked out the kinks in it, is the repurposing of iOS internationalization features to do text customization within the app. Rather than using literal strings for text throughout the app, we pull the strings out of .strings files. This allows stations to provide their own “translations” for each bit of text in the app without having to make a customization within the code. I call this customization method half-baked because when we add new strings to the app it is best if we can regenerate the .strings files, which causes them to be rebuilt using the defaults specified in the application’s code and wipes out the stations’ customizations. To avoid this we could add new strings in new .strings files, but this would result in a proliferation of these files over time.

Git Hooks and Ruby

Chris Rhoden posted on Wednesday, March 16th, 2011 | Git, iPhone, Ruby | 1 Comment

Happy Wednesday, everyone! I don’t have very much time this week, so I’ll keep this short and sweet: While we’ve historically used Subversion at PRX, we have recently been migrating to Git to take advantage of some awesome tools and to better interact with the community.

We’re also most comfortable with Ruby, so when I was asked to look into setting up a build server for our iOS apps, there wasn’t much question as to how I would do it.

I set up a bare git repository running on a spare MacBook Pro with XCode and found the commands that were necessary to run when a new build was ready to be deployed. The next step was to set up the appropriate hook for that Git repository so that the builds could be triggered by a push.

In the SVN world, this would be a post-commit hook, but because Git works differently (one push can contain many commits), the hook we are interested in is the post-receive hook. You can take a look in your .git/hooks directory for some samples, most of which are written in sh. We wanted something in ruby, and here’s what we came up with:

#!/usr/bin/env ruby

require 'rubygems'
require 'grit'

repo = Grit::Repo.new(File.join(File.dirname(__FILE__), '..','..'))
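# post-receive gets one line on stdin for each updated ref, in the form
# "<old-sha> <new-sha> <refname>".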
while msg = gets
  old_sha, new_sha, ref = msg.split(' ', 3)
  
  commit = repo.commit(new_sha)
  
  case ref
  when %r{^refs/heads/(.*)$}
    branch = $~[1]
    if old_sha.to_i(16) == 0
      
      # A new branch was pushed
    
    else
      
      # A branch was updated
      
    end
    
  when %r{^refs/tags/}
    tag_object = Grit::GitRuby::Repository.new(repo.path).get_object_by_sha1(new_sha)
    tag = tag_object.tag
    tag_message = tag_object.message
    if old_sha.to_i(16) == 0
      
      # A tag was created
      
    else
      
      # A tag was moved
      
    end
  end
end

Simply save this in your .git/hooks/post-receive file and make it executable. Then, every time you push to this remote, the script will execute. You can make whatever modifications are necessary for your specific application.

I hope this helps everyone working with Git hooks and Ruby!
