Wednesday, September 16, 2009

Missing Textures in OpenGL

I ran into an interesting problem the other day. In our OpenGL application, we had problems with our texture mapping. For some reason, two of my textures would render fine, but upon introducing a third texture, one of the others would disappear, rendering as a white square (which makes sense, since we use a lot of square meshes). This drove me crazy for a couple of days, until I came across this post in the Apple discussion forum.

The problem is that I had failed to notice that the texture minification and magnification filters are associated with individual texture objects. I had wrongly assumed that they were associated with the whole OpenGL context and, as a result, was only setting them once. On my system, the default minification filter was GL_NEAREST_MIPMAP_LINEAR. Since I never set up any mipmaps, textures that were being scaled down weren't drawing at all.
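Here's a sketch of the fix. The texture names and image data are placeholders; the important part is that the filters get set while each texture is bound, because that state lives in the texture object itself:

GLuint textures[3];
glGenTextures(3, textures);

for (int i = 0; i < 3; i++)
{
    glBindTexture(GL_TEXTURE_2D, textures[i]);

    // These parameters are stored in the texture object we just bound, not
    // in the GL context, so they must be set for every texture. GL_LINEAR
    // doesn't require mipmaps, unlike the default GL_NEAREST_MIPMAP_LINEAR
    // minification filter.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, widths[i], heights[i], 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, imageData[i]);
}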

Unfortunately, the man pages for glBindTexture and glTexParameter aren't completely clear about where the associated state is stored - in the GL context or in the texture object. There is this quote:

While a texture is bound, GL operations on the target to which it is bound affect the bound texture, and queries of the target to which it is bound return state from the bound texture.

Obviously, there are some exceptions to this rule (like glBindTexture itself), but it seems that we can derive a good rule of thumb: if an OpenGL command takes a texture target parameter (e.g. GL_TEXTURE_1D or GL_TEXTURE_2D), assume that it will affect the currently bound texture object unless you know otherwise. For example, glPrioritizeTextures takes a list of textures to prioritize, so it doesn't necessarily affect the currently bound texture.

It turns out that there is a lot of state that is stored on the texture object. My ancient copy of the OpenGL Red Book indicates that the texture object holds the raw image bytes, image format information, mipmaps, filters, wrapping modes, priority, and some other bits of data as well. It has been a long time since I have done any OpenGL programming. I don't remember if I ever really knew this (or if I was just lucky in the past).

So kids, remember: if you're not seeing your textures, check whether you have correctly configured the texture object.

Wednesday, August 26, 2009

Using git with Small Teams

For the time being, Gents with Beards is a small operation. As a result, we want simple solutions to our problems. One of the first problems we addressed was source control, and the solution we decided upon was git. For those who don't know, git is a source control system developed by Linus Torvalds to store the Linux kernel's source code. git is a good choice for distributed teams, but it is also an excellent choice for small teams who all sit in the same room. This post will describe our git setup and workflow. Before going further, however, I would be remiss if I didn't mention gitosis. I have not used it personally, but have heard many good things about it. You should definitely consider it if you plan to use git.

We use ssh as the transmission protocol of choice. We all use Macs, so it was possible to set up our git remotes with Bonjour hostnames: cathode.local, macteague.local, and rivendell.local. Incidentally, I have set up user accounts for both of the other gents, but this has unintended ramifications. Mac OS X permissions are very permissive in the default configuration - but that's for another post. I also set their login shell to git-shell.
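Setting up a remote for another gent is a one-liner. Something like this, though the user/host pairing and repository path here are made up for illustration:

git remote add marc marc@macteague.local:Projects/ourapp.git
git fetch marc

After that, marc behaves like any other named remote.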

There is no central server; instead, we fetch directly from each other. As a result, there is no single authority for the one true version of the code. While that sounds scary, in practice it hasn't been a problem. When I want to implement a feature, I usually create a new branch. I make my changes in that branch, and then merge it back into my master branch. I then tell everybody else that they should fetch my changes. They will pull my master into their master (which more often than not turns into a simple fast-forward). Thus, a few minutes after I finish a feature, it is shared with the other team members.
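In command form, a typical feature cycle looks roughly like this (the branch name is hypothetical):

git checkout -b fancy-feature
# hack, commit, hack, commit...
git checkout master
git merge fancy-feature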

Occasionally, two people will complete features around the same time. When this happens, their master branches have diverged, and a fast-forward won't suffice when we try to merge them together. Instead, we end up with one additional merge commit to combine the masters. This hasn't really been a problem for us.
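Concretely, once I have Marc's changes fetched (more on fetching below), combining our diverged masters is just:

git merge marc/master

git notices that it can't fast-forward and creates a merge commit with two parents instead.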

To get changes from others, we use git fetch. For example, to get changes from Marc, I would use:

git fetch marc

One could also use:

git fetch marc master

This would update my remotes/marc/master to point to his master commit, and would also fetch any necessary objects. However, when I fetch a particular branch, I only get updates to that branch. When you fetch without specifying a branch name, git will actually create or update entries in remotes/marc for every branch marc has. In a large team, this would be a terrible idea, but it works very well for us. If I decide that I want to work on the same feature, I can use:

git branch feature-branch-name marc/feature-branch-name

Now I have a local branch with the same name as Marc's branch. We can each make changes, fetch them from one machine to the other, merge the changes as necessary, and generally collaborate with ease. We will occasionally use git pull, but I like to separate fetching from merging.

We tag releases, and the tags end up getting shared with everybody when they fetch changes.
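For example, an annotated tag like this one (the name and message are made up) rides along the next time everybody fetches:

git tag -a v1.0 -m "Our first release"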

Something interesting about our workflow is that we never push changes. We will want to introduce a build server at some point, at which time we will need to have an authoritative copy of the code. Then, pushing will become more important.

One thing that disappointed me is that git doesn't seem to work as well when the machines aren't on the same network. At home, my laptop isn't internet-routable, and neither are any of the other Gents' laptops. So, if I work on something at home, it's hard for me to send it to the others. I could use patches, but that would cause the history to differ between my repository and theirs. I want to investigate git-bundle, which may do what I want.
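If it works the way the docs suggest, the workflow would be something like this (the file and ref names are hypothetical): I bundle up the commits the others don't have yet, carry the file over on a thumb drive or email it, and they fetch from the bundle as if it were a remote:

git bundle create from-home.bundle shared-tip..master   # shared-tip is a commit the recipient already has
# move from-home.bundle to the other machine, then:
git fetch from-home.bundle master:home-master
git merge home-master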

Saturday, August 15, 2009

VariTone Released!

At 9:00pm tonight, Gents with Beards received an email informing us that our first iPhone app, VariTone, had been released into the wild.

VariTone is an electronic diversion (okay, toy) for your iPhone or iPod touch.

It lets you record your voice and then play it back faster, slower, higher or lower. You can give yourself a fast chipmunk-like squeal, a slow growl, or even a pretty dead-on Fezzik impersonation.

It took longer than we thought and was harder than we imagined, but it's pretty exciting to see our efforts available on the App Store.

You can go to our App Store page with this link. We recommend that you buy at least seven or eight copies (you know, to be safe), and post embarrassingly lavish praise in a six-star review :).

Tuesday, August 4, 2009

Need help with VariTone?

Having some problems with VariTone? Email us at feedback@gentswithbeards.com.

Some Common Problems:

Problem: I get an error message when I try to record audio with my iPod Touch

You'll need an external microphone in order to make recordings with an iPod Touch, since it doesn't have an internal mic. Dan at GWB uses the Apple Earphones with Remote and Mic, but anything that is officially compatible with the iPod Touch should work just fine.

As we get more feedback on VariTone, we'll add more information to this post.


Sunday, June 28, 2009

-[AVAudioRecorder record] returns NO

The AVAudioPlayer class has been around for a while, but AVAudioRecorder is new in the iPhone 3.0 SDK. Because it's so new, there is little information out there, which made this problem harder to debug.

In our application, we use AVAudioRecorder for recording and OpenAL for playback. From some debugging and logging, it appeared that AVAudioRecorder would automatically switch the audio session category whenever it started and stopped recording. This matches what the Audio Session Programming Guide says. However, something about our OpenAL playback was breaking the AVAudioRecorder. We could record as many times as we wanted, but after we played back once, recording wouldn't work. In particular, -[AVAudioRecorder record] was returning NO, which indicates that something went wrong. Unfortunately, the documentation doesn't (yet) explain what might cause it to fail.

It turns out that AVAudioRecorder does automatically switch your audio session category, but only when it feels like it. If you've already explicitly set the category, AVAudioRecorder will assume that you know what you're doing, and will not auto-switch categories anymore. The solution was to use AudioSessionSetProperty() to set the audio category to either kAudioSessionCategory_RecordAudio or kAudioSessionCategory_PlayAndRecord before starting the recording.
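Roughly, that looks like the following sketch. (Error handling is omitted; the real calls return OSStatus values worth checking, and AudioSessionInitialize should only be called once per app.)

#import <AudioToolbox/AudioToolbox.h>

// Initialize the audio session once, early in the app's lifetime.
AudioSessionInitialize(NULL, NULL, NULL, NULL);

// Explicitly pick a category that permits recording before calling
// -[AVAudioRecorder record].
UInt32 category = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);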

If you're only using AVAudioPlayer, AVAudioRecorder, and system sound services for audio playback, you shouldn't need to explicitly set a category. The only reason we did was because we were using OpenAL.

Now to be a little critical: I'm not a fan of this behavior. In order to make AVAudioRecorder "easy to use", Apple had to make its behavior complicated. It needs to know whether the user has explicitly set the audio session category. It sometimes implicitly changes the category. When it changes the category, the change is not observable (by callbacks registered with AudioSessionAddPropertyListener). All of these things are frustrating, but the real aggravation is that there is no way for us to implement a class like AVAudioRecorder. Because the class seems to make use of private APIs, its behavior cannot be duplicated. As far as I can see, there is no good reason for it to be using private APIs. Maybe it was just laziness. Whatever the case, it's disappointing.

On the other hand, things aren't all bad. AVAudioRecorder replaced about 4 source files in our application. That alone makes it worth bearing the annoyances. I just wish it were more... sane.

Tuesday, June 23, 2009

iPhone Background Processes?

So, quite accidentally, I managed to make an iPhone app that continued to run in the background. I had a controller action that was hooked up to a button. Clicking the button would play the sound using OpenAL. Now, I was more interested in seeing if the code worked than in trying to make it work well. As a result, the action handler blocked until the sound clip was over. Because of the nature of OpenAL, you need to poll an audio source to tell when it has finished playing. Furthermore, it seems that OpenAL gets confused if the audio route changes while it is playing back audio. In my case, all my audio sources got stuck in a "playing" state that never finished.

Since my prototype code didn't handle audio route changes, my polling loop turned into an infinite loop. With this loop blocking the primary run loop of my app, the quit message was never processed. The application's hosting process continued to run for quite a while. I could see output continue to get spewed to the debug log.

Now, this isn't a very practical solution. Blocking your process's run loop has other bad side effects. However, considering how paranoid Apple is about background processes and battery life, I was astounded that the OS didn't kill my obviously runaway process. Though the device itself doesn't seem to enforce the "no background processes" mandate, it's likely that an app that did this would not make it past the App Store review process.

In my case, the fix was easy. I was waiting until the audio finished playing because I wanted to do some cleanup, so I made a watchdog NSTimer that occasionally polls my audio sources to see if they have finished playing and, if so, performs the cleanup.
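Here's a sketch of that watchdog, assuming the OpenAL source ID lives in an instance variable (all of the names here are made up):

// When playback starts, schedule a timer that polls the source every tenth of a second.
watchdogTimer = [NSTimer scheduledTimerWithTimeInterval:0.1
                                                 target:self
                                               selector:@selector(pollSource:)
                                               userInfo:nil
                                                repeats:YES];

- (void)pollSource:(NSTimer *)timer
{
    ALint state;
    alGetSourcei(sourceID, AL_SOURCE_STATE, &state);

    if (state != AL_PLAYING)
    {
        [self cleanUpAfterPlayback]; // whatever cleanup the app needs
        [timer invalidate];
        watchdogTimer = nil;
    }
}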

Sunday, June 21, 2009

Simple audio recording and playback

So, the Gents have been toying with iPhone audio recording and playback, and it has been an uphill struggle, to say the least. SpeakHere is the de facto standard demo everyone references, but it is quite abhorrent in both code style and ease of understanding. The underlying technology, Audio Queues, is already (overly?) complicated; I'd like my demos to help ease that complication, not make it worse, thankyouverymuch. And though I often hear that Apple's documentation is excellent, I'm less than impressed so far. They are worse than Microsoft at documenting edge cases, and sometimes I wonder if the tech writers think that developers care more about making pretty interfaces than understanding how to use the core APIs that their software has to interact with.

But just today, I found out that Apple provides AVAudioRecorder and AVAudioPlayer as super easy to use wrappers around the Audio Queue stuff... as long as your needs aren't too advanced. They won't help us for what we want to do on the audio playback side of things, but they are a hell of a lot easier to use and understand for audio recording. I slapped together a sample program based solely on the docs in about an hour, and it worked the first time (an unfortunately unique experience with iPhone audio programming so far)! Apparently, these classes sit in the sweet spot for people who want to play more than a few seconds of audio at a time but aren't manic control-freak sound engineers who have to tweak every last parameter to perfection.

Here's my setup code for the recording. (Note the nil NSDictionary for settings: I was going to provide some, since the docs weren't clear about what would happen if I didn't, but it started getting complicated, so I just tried nil, and it worked! Gasp! Reasonable defaults? It was more complicated to set up the file path than to record the audio!)




////////
// Make a url for the file in the app's documents directory
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *recordFilePath = [documentsDirectory stringByAppendingPathComponent:@"recordedFile.caf"];
NSURL *url = [NSURL fileURLWithPath:recordFilePath];
////////

NSDictionary *settings = nil;

NSError *error = nil;
recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];

if (error)
{
    NSLog(@"An error occurred while initializing the audio recorder! -- %@", error);
    exit(-1);
}


And here's my record button code (gasp! It doesn't require 50+ lines just to record some audio?):



- (IBAction)record
{
    NSLog(@"Record pressed");

    if ([recorder isRecording])
    {
        NSLog(@"Stopping recording...");
        [recorder stop];
    }
    else
    {
        NSLog(@"Starting recording...");
        [recorder record];
    }
}


The player setup code was even easier:



player = [AVAudioPlayer alloc];


and my play method:


- (IBAction)play
{
    NSLog(@"Play pressed");

    ////////
    // Make a url for the file in the app's documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *recordFilePath = [documentsDirectory stringByAppendingPathComponent:@"recordedFile.caf"];
    NSURL *url = [NSURL fileURLWithPath:recordFilePath];
    ////////

    NSError *error = nil;
    [player initWithContentsOfURL:url error:&error];

    if (error)
    {
        NSLog(@"An error occurred while initializing the audio player! -- %@", error);
        exit(-1);
    }

    [player play];
}


Yeah, I duplicated the file path setup code; it's experimental code, sue me. More importantly, notice that I broke with the Objective-C [[X alloc] init] paradigm and went with something that feels 1000x hackier: calling init every time we are about to play. It seems that AVAudioPlayer wants an existing file when it is initialized, but since I may not have one recorded yet, it would error out.

Now, I'm not an Objective-C programmer; this is all new to me. But what I did feels like the moral equivalent of pre-allocating some memory and calling placement new repeatedly without ever actually destroying the object, i.e. very, very evil. In C++, all sorts of resource leaks may occur, but who knows in this wacky Objective-C world? As far as I can tell, Apple's documentation doesn't explain what happens if init is called multiple times on AVAudioPlayer, and that seems like a crucial piece of information.
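If I had to guess at a less evil approach, it would be to allocate a fresh player each time and release the old one, rather than re-initializing a single allocation. A sketch, under manual retain/release and assuming the same player instance variable and url setup as above:

- (IBAction)play
{
    NSLog(@"Play pressed");

    // ...same url setup as above...

    NSError *error = nil;
    AVAudioPlayer *newPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];

    if (newPlayer == nil)
    {
        NSLog(@"Couldn't initialize the audio player! -- %@", error);
        return;
    }

    // Release the previous player (if any) and keep the new one.
    [player release];
    player = newPlayer;

    [player play];
}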

Thursday, June 18, 2009

iPhone vs. Android: Initial Developer Experience

When the G1 came out, I tore into the Android SDK. I noted at the time that, compared to the iPhone, it appeared much easier and cheaper to get an application from the drawing board onto users' phones. Well, now that I've done some development for both, I can confirm that my suspicions were correct. There's just less hoopla with Android.

  1. The Android SDK, including beta versions, is available to everybody. You don't need to pay a fee to be a "special" developer with early access to the tools.
  2. Every Android phone out there, whether a retail phone or a developer phone, is ready for development. There is no provisioning, no additional fees, no nothing. You check a little checkbox in the phone's settings and you can deploy code to your device.
  3. You don't need to be on T-Mobile to develop for Android. Google will sell you a developer phone, T-Mobile will unlock their devices if you ask, and people are porting Android to various devices at an alarming rate. Another of The Gents would actually like a separate iPhone for development purposes (his sole iPhone doubles as his primary communication wedge), but he can't see any easy way to acquire one without signing yet another pact with AT&T.
  4. Testing on hardware isn't quite as necessary with Android. The Android SDK doesn't include a simulator; it includes an emulator, which runs a full Android system image on emulated hardware. This means that the binary you deploy to the emulator is the same binary that you deploy to an actual phone. The iPhone simulator, on the other hand, just runs x86 applications in a little window that looks like a phone, even though the iPhone itself has an ARM chip. The difference is particularly important when you run into strange, simulator-only problems.
  5. The Android Market is cheaper than the App Store for developers. With Android, it doesn't cost anything to deploy to devices. To sell apps in the Market, Google charges a one-time fee of $25. Apple charges $99 per year. That might not be bad for commercial software, but it must be murder for people who give their apps away for free.
  6. You don't need to go through Google. If you get rejected from the Android Market or just want to avoid it, you can distribute your app on your own. Users will be able to install it. While Google is the gatekeeper for the Market, they are not the gatekeeper for your phone. Apple, as everybody knows, has a seemingly broken application approval process. As a developer, you just need to pray that the approval die is cast in your favor. If not, you just wasted weeks of time. Though, I suppose you could just re-submit it and hope that you get luckier the second time around.

Of course, iPhone development isn't all bad. It's just so much harder and more expensive to get your first app running on your phone.

Wednesday, June 17, 2009

OpenAL in the iPhone Simulator

As somebody without an iPhone, I knew off the bat that there would be a barrier to developing iPhone apps. Fortunately, the SDK and simulator are free, so I figured I had a pretty good shot at getting started and, when it became necessary, I could invest in some hardware. It turns out that day was much closer than I thought.

In the app that The Gents have started to write, we knew we would want to use OpenAL. Apple has an OpenAL sample app that you can download and try. When I tried running it in the simulator, though, I got nothing. No audio. No app. It got as far as displaying default.png before giving up the ghost and crashing back to the home screen. I was left with some cryptic messages in the run log:

AQMEIOBase::DoStartIO: timeout
AQMEDevice::StartIO: AudioOutputUnitStart returned -66681
AUIOClient_StartIO failed (-66681)

Interestingly enough, the other two bearded ones had no problems with the simulator at all.

After much Googling, I was left with speculation but no real information. Some people believed that it was a bug that Apple was fixing. Others just asserted that sound doesn't work reliably in the simulator and that you need to test on hardware, period. One person had the exact same problem as me. I decided that day to bite the bullet and buy an iPod Touch. I got mine at Sam's Club, went back to The Lab, and then remembered that I had to pay my $99 Apple tax to provision it for development. That process was not exactly instantaneous. With my spare time, I decided to refresh my aging Leopard install as well.

Long story short: re-installing from scratch worked. I didn't do an "Archive and Install" - I wiped the disk and restored files from a backup by hand. It was slow and laborious, but it worked. My theory is that some sound software that I had installed long ago did something nasty to my system configuration that never manifested until I stuck the iPhone Simulator on it.

In the end, I didn't need the iPod Touch, though I think I'm glad to have bought it. It gives us another platform to develop for and test on, which is pretty awesome. I can start to see what it's like to be an iPhone user, and I get to complain about Apple's treatment of the iPod touch as a second-class citizen. Everybody wins!

Introduction

With my first blog post, I thought it would be good to introduce myself. I've been developing software for... quite a while now. Recently, I've done a lot with Java and C#. I dabbled a little in Objective-C a few years back (in the 10.3 days), but never really finished anything. Recently, though, most of my hobbyist time has been going toward Android development. My primary phone is a G1, and I have no plans to switch to an iPhone. So I find myself feeling a bit like a spy deep in enemy territory. I'm going to take some pictures, draw some maps, and then sprint back across the border to friendlier turf... or something like that. More likely, I'll spend time comparing iPhone development to Android development.

Incidentally, I already have an established blog that you may want to check out. I'm going to try putting Apple/iPhone stuff here and other stuff there. There may be some cross-posting. People may be inconvenienced.

Wednesday, June 10, 2009

WWDC Keynote Thoughts

For immediate release:

First let me say that I really love a pseudo-press release. Don't hold back, New York Times! You may include my words in your paper. What are the alternatives to "For immediate release"? Do you give people some kind of rules for when they can write about your press release if it isn't "immediate"? Can you rely on a media organization to hold off on printing things by including magic words at the top of a document that YOU SENT TO THEM?

Yesterday was the keynote for Apple's World Wide Developer Conference. This has traditionally been Steve Jobs's duty, but it fell to one of his droogs this time around.

While we at Gentlemen With Beards are Mac users and developers, I hope that we don't ever drift into the realm of fanboyism. While we may enjoy the occasional sip of Apple's Kool-Aid, we try to avoid taking big gulps. Another caveat worth mentioning is that Gentlemen With Beards are Mac/iPhone developers, so our bar for purchasing new Macs/iPhones is lower than normal people's.

This year's WWDC keynote seemed to contain the typical number of announcements. The executive summary: a new version of Mac OS X is coming soon, the MacBook laptops have been upgraded, the new version of the iPhone OS is coming in June, and there is a new version of the iPhone also coming out in June. The upgrades seem like steps in the right direction, but there isn't really anything that makes me want to throw my current MacBook/iPhone in the trash and run to the Apple store.

The first announcement regarded the next iteration of the MacBook laptops. As usual, the processors and RAM have been upgraded. Apple is really touting the new laptop screen as having a 60% greater color gamut than the current screen. This sounds good, but I'll have to see it before I can let it influence a purchasing decision. They've added FireWire back to the 13" MacBook (which is now called the 13" MacBook Pro). Not having FireWire on the current generation of MacBooks seemed like a tactical mistake (even though I don't use it myself), and Apple seems to agree. They've also added an SD card slot, which seems like a useful add-on. Backlit keyboards are now standard on all MacBooks (shrug). If you have a unibody MacBook already, I can't imagine trading it in for the new models unless you can find someone willing to pay almost full price for your used laptop. People with older MacBooks or older MacBook Pros may see this as a good time to upgrade.

Apple also demoed the new version of Mac OS X, Snow Leopard. Snow Leopard seems to be largely a bug-fixing release, but many of the built-in apps have been reworked. What makes me excited about this release is that they've dropped the price from the old standard $129 (ouch) to a very palatable $29. At that price, I can't think of any reason not to upgrade to Snow Leopard in September when it's released.

Current iPhone owners will find the OS 3.0 update hitting their phones around June 17th. Apple has touted over 1,000 new APIs in the new version. I don't want to be a downer or anything, but I have to mention that even big geeks like myself don't think of upgrades in terms of the number of new APIs. Anyway, many of the iPhone's painfully missing features are going to appear in this update. Features like cut and paste, MMS (well, in late summer for us AT&T users), and push notification capability are all present in 3.0. Apple has also reworked Mobile Safari to improve its performance. Having tried out the release candidate, I can say that I have really appreciated the "whole phone" search capability.

The final big announcement was the new iPhone 3G S. That they've added an S to the end of the name instead of calling it the iPhone 2 gives you a quick idea of the scale of the changes. The 3G S has a faster processor than the current iPhone, a still camera with autofocus that also shoots video, and voice control capability. They've also doubled the amount of storage on both models while keeping the prices the same: $199 for 16GB and $299 for 32GB. The current 8GB iPhone 3G will have its price lowered to $99. Is it worth an upgrade? I don't know.

If you don't have an iPhone, $99 is a pretty good deal if your current cellphone contract has expired. Likewise, new features for the same price are good, but I don't think they're $400-500 good if you're in the midst of your current cellphone contract. Most iPhone 3G customers have at least 6-7 months before they hit AT&T's 18-month upgrade eligibility point. Unless you've got money to burn or someone else is paying for it, I would recommend waiting at least until you can upgrade your phone for $200.

There you have it. All of the product lines have moved forward, but the only people who need to move quickly are those who bought their MacBooks or iPhones a few days ago.



Welcome to Gents with Beards

Well well well. Eric claimed he would be making a post here shortly, but let's just see if I can't claim the honor spot of first post.

Welcome to Gents with Beards! Purveyors of fine software, tonics, elixirs, and embedded applications! Witness opinionated gentry down on their luck gathering together to explore what the iPhone is capable of and perhaps find a way to earn a jitney while we're at it, but most of all, to have a grand time learning a new technology.