Wednesday, September 16, 2009

Missing Textures in OpenGL

I ran into an interesting problem the other day. In our OpenGL application, we were having trouble with texture mapping. For some reason, two of my textures would render fine, but upon introducing a third texture, one of the others would disappear, rendering as a white square (which makes sense, since we use a lot of square meshes). This drove me crazy for a couple of days, until I came across this post in the Apple discussion forum.

The problem was that I had failed to notice that the texture minification and magnification filters are associated with individual texture objects. I had wrongly assumed that they were associated with the whole OpenGL context and, as a result, was only setting them once. On my system, the default minification filter was GL_NEAREST_MIPMAP_LINEAR. Since I had never supplied any mipmaps, that filter had nothing to sample from, and textures that were being scaled down weren't drawing at all.
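
For reference, here is a minimal sketch of the kind of per-texture setup I should have been doing (the texture name and the image upload are placeholders; the point is that the filter calls follow the bind):

GLuint tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);

// These parameters live in the texture object, not the context,
// so they must be set for every texture you create.
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// glTexImage2D(...) would upload the pixel data here.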

Unfortunately, the man pages for glBindTexture and glTexParameter aren't completely clear about where the associated state is stored - in the GL context or in the texture object. There is this quote:

While a texture is bound, GL operations on the target to which it is bound affect the bound texture, and queries of the target to which it is bound return state from the bound texture.
Obviously, there are some exceptions to this rule (like glBindTexture itself), but it seems that we can derive a good rule of thumb: if an OpenGL command takes a texture target parameter (e.g. GL_TEXTURE_1D or GL_TEXTURE_2D), assume that it will affect the currently bound texture object unless you know otherwise. By contrast, glPrioritizeTextures takes an explicit list of texture names rather than a target, so it doesn't necessarily affect the currently bound texture.

It turns out that there is a lot of state that is stored on the texture object. My ancient copy of the OpenGL Red Book indicates that the texture object holds the raw image bytes, image format information, mipmaps, filters, wrapping modes, priority, and some other bits of data as well. It has been a long time since I have done any OpenGL programming. I don't remember if I ever really knew this (or if I was just lucky in the past).

So kids, remember: if you're not seeing your textures, check whether you have correctly configured the texture object.

Wednesday, August 26, 2009

Using git with Small Teams

For the time being, Gents with Beards is a small operation. As a result, we want simple solutions to our problems. One of the first problems we addressed was source control, and the solution we decided upon was git. For those who don't know, git is a source control system developed by Linus Torvalds to store the Linux kernel's source code. git is a good choice for distributed teams, but it is also an excellent choice for small teams who all sit in the same room. This post will describe our git setup and workflow. Before going further, however, I would be remiss if I didn't mention gitosis. I have not used it personally, but have heard many good things about it. You should definitely consider it if you plan to use git.

We use ssh as our transport protocol. We all use Macs, so it was possible to set up our git remotes with Bonjour hostnames: cathode.local, macteague.local, and rivendell.local. Incidentally, I have set up user accounts for both of the other gents, but this has unintended ramifications. Mac OS X permissions are very permissive in the default configuration - but that's for another post. I also set their login shell to git-shell.
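
For illustration, adding one of the other gents as a remote looks something like this (the user name and repository path here are hypothetical):

git remote add marc ssh://marc@macteague.local/Users/marc/code/project.git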

There is no central server; instead, we fetch directly from each other. As a result, there is no single authority for the one true version of the code. While that sounds scary, in practice it hasn't been a problem. When I want to implement a feature, I usually create a new branch. I make my changes in that branch, and then merge it back into my master branch. I then tell everybody else that they should fetch my changes. They pull my master into their master (which more often than not turns into a simple fast-forward). Thus, a few minutes after I finish a feature, it is shared with the other team members.
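
In command form, that workflow looks roughly like this (the branch name is made up):

git checkout -b some-feature
# ...edit, commit, repeat...
git checkout master
git merge some-feature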

Occasionally, two people will complete features around the same time. When this happens, their master branches have diverged, and a fast-forward won't suffice when we try to merge them together. Instead, we end up with one additional merge commit to combine the masters. This hasn't really been a problem for us.

To get changes from others, we use git fetch. For example, to get changes from Marc, I would use:

git fetch marc

One could also use:

git fetch marc master

This would update my remotes/marc/master to point to his master commit, and would also fetch any necessary objects. However, when I fetch a particular branch, I only get updates to that branch. When you fetch without specifying a branch name, git will actually create or update entries in remotes/marc for every branch marc has. In a large team, this would be a terrible idea, but it works very well for us. If I decide that I want to work on the same feature, I can use:

git branch feature-branch-name marc/feature-branch-name

Now I have a local branch with the same name as Marc's branch. We can each make changes, fetch them from one machine to the other, merge the changes as necessary, and generally collaborate with ease. We will occasionally use git pull, but I like to separate fetching from merging.
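
Spelled out, pulling in Marc's latest work amounts to:

git fetch marc
git merge marc/master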

We tag releases, and the tags end up getting shared with everybody when they fetch changes.

Something interesting about our workflow is that we never push changes. We will want to introduce a build server at some point, at which time we will need to have an authoritative copy of the code. Then, pushing will become more important.

One thing that disappointed me is that git doesn't seem to work as well when the machines aren't on the same network. At home, my laptop isn't Internet-routable, and neither are any of the other Gents' laptops. So, if I work on something at home, it's hard for me to send it to the others. I could use patches, but then my history would differ from theirs. I want to investigate git-bundle, which may do what I want.
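
From what I've read (I haven't tried it yet), the idea would be to pack my commits into a file, carry or email it over, and fetch from it like a remote - roughly:

git bundle create mywork.bundle master
# copy mywork.bundle to the other machine, then:
git fetch /path/to/mywork.bundle master:dan-master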

Saturday, August 15, 2009

VariTone Released!

At 9:00pm tonight, Gents with Beards received an email informing us that our first iPhone app, VariTone, had been released into the wild.

VariTone is an electronic diversion (okay, toy) for your iPhone or iPod touch.

It lets you record your voice and then play it back faster, slower, higher or lower. You can give yourself a fast chipmunk-like squeal, a slow growl, or even a pretty dead-on Fezzik impersonation.

It took longer than we thought and was harder than we imagined, but it's pretty exciting to see our efforts available on the App Store.

You can go to our App Store page with this link. We recommend that you buy at least seven or eight copies (you know, to be safe), and post embarrassingly lavish praise in a six-star review :).

Tuesday, August 4, 2009

Need help with VariTone?

Having some problems with VariTone? Email us at feedback@gentswithbeards.com.

Some Common Problems:

Problem: I get an error message when I try to record audio with my iPod Touch

You'll need an external microphone in order to make recordings with an iPod Touch, since it doesn't have an internal mic. Dan at GWB uses the Apple Earphones with Remote and Mic, but anything that is officially compatible with the iPod Touch should work just fine.

As we get more feedback on VariTone, we'll add more information to this post.


Sunday, June 28, 2009

-[AVAudioRecorder record] returns NO

The AVAudioPlayer class has been around for a while, but AVAudioRecorder is new in the iPhone 3.0 SDK. Because it is so new, there is little information out there, which made this problem harder to debug.

In our application, we use AVAudioRecorder for recording and OpenAL for playback. From some debugging and logging, it appeared that AVAudioRecorder would automatically switch the audio session category whenever it started and stopped recording. This matches what the Audio Session Programming Guide says. However, something about our OpenAL playback was breaking the AVAudioRecorder. We could record as many times as we wanted, but after playing back once, recording wouldn't work. In particular, -[AVAudioRecorder record] was returning NO, which indicates that something went wrong. Unfortunately, the documentation doesn't (yet) explain what might cause it to fail.

It turns out that AVAudioRecorder does automatically switch your audio session category, but only when it feels like it. If you've already explicitly set the category, AVAudioRecorder will assume that you know what you're doing, and will not auto-switch categories anymore. The solution was to use AudioSessionSetProperty() to set the audio category to either kAudioSessionCategory_RecordAudio or kAudioSessionCategory_PlayAndRecord before starting the recording.
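
The call itself is tiny; we make it before starting a recording (this assumes the audio session has already been initialized with AudioSessionInitialize()):

UInt32 category = kAudioSessionCategory_PlayAndRecord;
AudioSessionSetProperty(kAudioSessionProperty_AudioCategory,
                        sizeof(category), &category);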

If you're only using AVAudioPlayer, AVAudioRecorder, and system sound services for audio playback, you shouldn't need to explicitly set a category. The only reason we did was because we were using OpenAL.

Now to be a little critical: I'm not a fan of this behavior. In order to make AVAudioRecorder "easy to use", Apple had to make its behavior complicated. It needs to know whether the user has explicitly set the audio session category. It sometimes implicitly changes the category. When it changes the category, the change is not observable (by callbacks registered with AudioSessionAddPropertyListener). All of these things are frustrating, but the real aggravation is that there is no way for us to implement a class like AVAudioRecorder. Because the class seems to make use of private APIs, its behavior cannot be duplicated. As far as I can see, there is no good reason for it to be using private APIs. Maybe it was just laziness. Whatever the case, it's disappointing.

On the other hand, things aren't all bad. AVAudioRecorder replaced about 4 source files in our application. That alone makes it worth bearing the annoyances. I just wish it were more... sane.

Tuesday, June 23, 2009

iPhone Background Processes?

So, quite accidentally, I managed to make an iPhone app that continued to run in the background. I had a controller action that was hooked up to a button. Clicking the button would play the sound using OpenAL. Now, I was more interested in seeing if the code worked than in trying to make it work well. As a result, the action handler blocked until the sound clip was over. Because of the nature of OpenAL, you need to poll an audio source to tell when it has finished playing. Furthermore, it seems that OpenAL gets confused if the audio route changes while it is playing back audio. In my case, all my audio sources got stuck in a "playing" state that never finished.

Since my prototype code didn't handle audio route changes, my polling loop turned into an infinite loop. With this loop blocking the primary run loop of my app, the quit message was never processed. The application's hosting process continued to run for quite a while. I could see output continue to get spewed to the debug log.

Now, this isn't a very practical solution. Blocking your process's run loop has other bad side effects. However, considering how paranoid Apple is about background processes and battery life, I was astounded that the OS didn't kill my obviously runaway process. Though the device itself doesn't seem to enforce the "no background processes" mandate, it is likely that an app that did this would not make it past the App Store review process.

In my case, it was easy enough to fix up the code. I was waiting until the audio finished playing because I wanted to do some cleanup. Instead, I made a watchdog NSTimer that occasionally polls my audio sources to see if they have finished playing and, if so, performs the cleanup.
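
A sketch of what that watchdog looks like (the timer interval, ivar, and method names here are made up):

// Scheduled from the playback code:
// [NSTimer scheduledTimerWithTimeInterval:0.25 target:self
//     selector:@selector(pollAudioSources:) userInfo:nil repeats:YES];

- (void)pollAudioSources:(NSTimer *)timer
{
    ALint state;
    alGetSourcei(soundSource, AL_SOURCE_STATE, &state);
    if (state != AL_PLAYING)
    {
        [timer invalidate];
        // ...release buffers and do the rest of the cleanup here...
    }
}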

Sunday, June 21, 2009

Simple audio recording and playback

So, the Gents have been toying with iPhone audio recording and playback, and it has been an uphill struggle, to say the least. SpeakHere is the de facto standard demo everyone references, but it is quite abhorrent in both code style and ease of understanding. The underlying technology, Audio Queues, is already (overly?) complicated; I'd like my demos to help ease that complication, not make it worse, thankyouverymuch. And though I often hear that Apple's documentation is excellent, I'm less than impressed so far. They are worse than Microsoft at documenting edge cases, and sometimes I wonder if the tech writers think that developers care more about making pretty interfaces than understanding how to use the core APIs that their software has to interact with.

But just today, I found out that Apple provides AVAudioRecorder and AVAudioPlayer as super easy-to-use wrappers around the Audio Queue stuff... as long as your needs aren't too advanced. It won't help us for what we want to do on the audio playback side of things, but it is a hell of a lot easier to use and understand for audio recording. I slapped together a sample program based solely on the docs in about an hour, and it worked the first time (an unfortunately unique experience with iPhone audio programming so far)! Apparently, these classes sit in the sweet spot between wanting to play more than a few seconds of audio at a time and not being a manic control-freak sound engineer who has to tweak every last parameter to perfection.

Here's my setup code for the recording (note the nil NSDictionary for settings; I was going to try to provide some since the docs weren't clear what would happen if I didn't, but it started getting complicated so I just tried nil, and it worked! Gasp! Reasonable defaults? It was more complicated to set up the file path than to record the audio!):




////////
// Make a URL for the file in the app's Documents directory
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
NSString *recordFilePath = [documentsDirectory stringByAppendingPathComponent:@"recordedFile.caf"];
NSURL *url = [NSURL fileURLWithPath:recordFilePath];
////////

// nil settings: let AVAudioRecorder pick its defaults
NSDictionary *settings = nil;

NSError *error = nil;
recorder = [[AVAudioRecorder alloc] initWithURL:url settings:settings error:&error];

if (error)
{
    NSLog(@"An error occurred while initializing the audio recorder! -- %@", error);
    exit(-1);
}


And here's my record button code (gasp! It doesn't require 50+ lines just to record some audio?):



- (IBAction)record
{
    NSLog(@"Record pressed");

    if ([recorder isRecording])
    {
        NSLog(@"Stopping recording...");
        [recorder stop];
    }
    else
    {
        NSLog(@"Starting recording...");
        [recorder record];
    }
}


The player setup code was even easier:



player = [AVAudioPlayer alloc];


and my play method:


- (IBAction)play
{
    NSLog(@"Play pressed");

    ////////
    // Make a URL for the file in the app's Documents directory
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *documentsDirectory = [paths objectAtIndex:0];
    NSString *recordFilePath = [documentsDirectory stringByAppendingPathComponent:@"recordedFile.caf"];
    NSURL *url = [NSURL fileURLWithPath:recordFilePath];
    ////////

    NSError *error = nil;
    [player initWithContentsOfURL:url error:&error];

    if (error)
    {
        NSLog(@"An error occurred while initializing the audio player! -- %@", error);
        exit(-1);
    }

    [player play];
}


Yeah, I duplicated the file path setup code; it's experimental code, sue me. More importantly, notice that I broke with the Objective-C [[X alloc] init] paradigm and went with something that feels 1000x hackier: calling init every time we are about to play. It seems that AVAudioPlayer wants an existing file when it is initialized, but since I may not have one recorded yet, it would error out.

Now, I'm not an Objective-C programmer, this is all new to me, but what I did feels like the moral equivalent of pre-allocating some memory and calling placement new repeatedly without ever actually destroying the object, i.e. very, very evil. In C++, all sorts of resource leaks may occur, but who knows in this wacky Objective-C world? As far as I can tell, Apple's documentation doesn't explain what happens if init is called multiple times on AVAudioPlayer, and that seems like a crucial piece of information.
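
For what it's worth, an alternative (which I haven't tried) would be to throw the old instance away and allocate a fresh player for each playback - something like this sketch:

NSError *error = nil;
[player release]; // discard the previous player, if any
player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
if (player == nil)
{
    NSLog(@"Couldn't create the audio player -- %@", error);
    return;
}
[player play];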