iPhone Top Tips is for developers working on iPhone application or games development. I post my past experience and new techniques here, which will be useful for you guys.
Friday, November 12, 2010
Cocos2d Customize Labels and Fonts
(Please note that from cocos2d version 0.7+ on, the label is added to its layer via addChild: and not add: e.g. [self addChild:myLabel];)
Pros and Cons of TTF labels (CCLabel):
* + All the pros of TTF fonts: any size, kerning support, etc.
* + Easy to use. No need to use an external editor.
* - The creation/update is very slow, since a new texture will be created.
Pros and Cons of texture atlas labels (CCLabelAtlas, CCBitmapFontAtlas):
* + The creation/update is very fast, since they don't create a new texture.
* + Fonts can be customized (shadows, gradients, blur, etc.)
* - Depends on external editors: AngelCode / Hiero editor, GIMP / Photoshop
Creating labels: Simple way
The easiest way to create a label is by using the CCLabel object. Example:
CCLabel *label = [CCLabel labelWithString:@"Hello World" fontName:@"Marker Felt" fontSize:24];
[self add: label];
fontName is the TTF font name to be used.
You can use your own custom TTF file. You just need to add the .ttf file to the project. Example of custom TTF file:
CCLabel *label = [CCLabel labelWithString:@"Hello World" fontName:@"Schwarzwald Regular" fontSize:24];
[self add: label];
* cocos2d will try to load the font using the FontLabel library.
* If it fails, it will use the UIFont class.
Important: The size of the OpenGL texture will be automatically calculated based on the font size and font name.
Creating labels: Complex way
You can also create labels using this API:
CCLabel *left = [CCLabel labelWithString:@"Hello World" dimensions:CGSizeMake(480,50) alignment:UITextAlignmentLeft fontName:@"Marker Felt" fontSize:32];
[self add: left];
If you use this way, you should pass the dimensions of the OpenGL texture to be used. If the texture is not big enough, only part of the label will be rendered.
Possible alignments:
* UITextAlignmentLeft (left alignment)
* UITextAlignmentCenter (center alignment)
* UITextAlignmentRight (right alignment)
Updating
Like any object that implements the CCLabelProtocol protocol you can update it using the setString method. Example:
[label setString: @"Hello World 2"];
Important: Every time you call setString a NEW OpenGL texture will be created. This means that setString is as slow as creating a new CCLabel. So, DO NOT use CCLabel objects if you need to update them frequently. Instead use CCLabelAtlas or CCBitmapFontAtlas.
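For example, a score counter that changes every frame is a poor fit for CCLabel but a natural fit for CCBitmapFontAtlas. A minimal sketch (the .fnt file name and the score variable are placeholders):
// Create once: scoreFont.fnt stands in for your own bitmap font file
CCBitmapFontAtlas *scoreLabel = [CCBitmapFontAtlas bitmapFontAtlasWithString:@"Score: 0" fntFile:@"scoreFont.fnt"];
[self addChild:scoreLabel];
// Update as often as you like - no new texture is created
[scoreLabel setString:[NSString stringWithFormat:@"Score: %d", score]];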
Color
You can change the color of your label by setting its color property, like so:
label.color = ccc3(0,0,0);
// transparency is controlled separately via the opacity property (0-255)
label.opacity = 128;
ccc3 Example Colors:
white - (255,255,255)
black - (0,0,0)
blue - (0,0,255)
green- (0,255,0)
red - (255,0,0)
Grey – (84,84,84)
Brown – (165,42,42)
Pink – (255,192,203)
Purple – (160,32,240)
Yellow – (255,255,0)
Gold – (255,215,0)
Alignment
If you want to modify the alignment you can use the anchorPoint property. Example:
//left alignment
[label setAnchorPoint: ccp(0, 0.5f)];
// right alignment
[label setAnchorPoint: ccp(1, 0.5f)];
// center alignment (default)
[label setAnchorPoint: ccp(0.5f, 0.5f)];
Texture Atlas labels
There are 2 types of labels based on texture atlas:
* CCBitmapFontAtlas
* CCLabelAtlas
Introduction
The CCBitmapFontAtlas is the suggested way to create fast labels since:
* The bitmap (image) can be customized with the editors
* You can update/init the label without penalty
* It is very flexible. Each letter of the label can be treated like a CCSprite
* It has kerning support
The CCBitmapFontAtlas label parses the Angel Code Font format to create a label. To create this kind of label, you can use any of these editors:
* http://www.n4te.com/hiero/hiero.jnlp (Java version)
* http://slick.cokeandcode.com/demos/hiero.jnlp (Java version)
* http://www.angelcode.com/products/bmfont/ (Windows only)
Java editors vs. Windows editor:
* The Windows editor is the official AngelCode editor
* The Java editors run on the Mac
* The Java editors have additional features like shadow, gradient and blur
Creating a BitmapFontAtlas
To create a CCBitmapFontAtlas object you need to do:
CCBitmapFontAtlas *label = [CCBitmapFontAtlas bitmapFontAtlasWithString:@"Hello World" fntFile:@"bitmapFontTest.fnt"];
[self add:label];
Manipulating each character
Since CCBitmapFontAtlas is a subclass of CCSpriteSheet you can manipulate each character as a CCSprite. The 1st character will be added with tag = 0, the 2nd character will be added with tag = 1, and so on. Example:
CCBitmapFontAtlas *label = [CCBitmapFontAtlas bitmapFontAtlasWithString:@"Bitmap Font Atlas" fntFile:@"bitmapFontTest.fnt"];
CCSprite *char_B = (CCSprite*) [label getChildByTag:0]; // character 'B'
CCSprite *char_m = (CCSprite*) [label getChildByTag:3]; // character 'm'
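As a quick sketch of what this allows (assuming the label above has been added to the layer), individual characters can be tinted or animated like any other sprite:
// Purely illustrative: tint the 'B' red and make the 'm' hop once
char_B.color = ccc3(255, 0, 0);
[char_m runAction:[CCJumpBy actionWithDuration:1.0f position:ccp(0,0) height:20 jumps:1]];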
LabelAtlas
Introduction
CCLabelAtlas was the 1st fast label added into cocos2d. But it was superseded by CCBitmapFontAtlas. It is being maintained for backwards compatibility, but you should use CCBitmapFontAtlas instead.
Creating a LabelAtlas
CCLabelAtlas *label = [CCLabelAtlas labelAtlasWithString:@"Hello World" charMapFile:@"tuffy_bold_italic-charmap.png" itemWidth:48 itemHeight:64 startCharMap:' '];
[self add:label];
* charMapFile is an image file that contains all the characters. Each character should be ordered according to its ASCII value, and the image can't contain more than 256 characters.
* itemWidth is the width of each character in pixels
* itemHeight is the height of each character in pixels
* startCharMap is the first character of the map.
Updating a LabelAtlas / BitmapFontAtlas
Like any object that implements the CCLabelProtocol protocol you can update it using the setString method.
[label setString:@"Hello World 2"];
It is worth noting that updating a CCLabelAtlas or a CCBitmapFontAtlas has almost no penalty.
Alignment in LabelAtlas / BitmapFontAtlas
If you want to modify the alignment you can use the anchorPoint property. Example:
//left alignment
[label setAnchorPoint: ccp(0, 0.5f)];
// right alignment
[label setAnchorPoint: ccp(1, 0.5f)];
// center alignment (default)
[label setAnchorPoint: ccp(0.5f, 0.5f)];
Tuesday, November 9, 2010
Implement Push Notification in iPhone Game or Application
As part of the product, we have an iPhone application that includes push notifications as an alerting option, so you can be notified via push directly to your iPhone when one of your server alerts has been triggered. This is useful since our app can then be launched to instantly see the details of the server that caused the alert.
Apple provides detailed code documentation for the iPhone OS code that is needed to implement and handle the alerts on the device but only provides a higher level guide for the provider server side.
As a provider, you need to communicate with the Apple Push Notification Service (APNS) to send the messages that are then pushed to the phone. This is necessary so that the device only needs to maintain 1 connection to the APNS, helping to reduce battery usage.
This tutorial will go into code-level detail about how we built our push notification provider server to allow us to interact with the APNS and use the push notifications with our server monitoring iPhone application. Since we develop in PHP, our examples will be in PHP 5.
Basic Structure
1. You connect to the APNS using your unique SSL certificate
2. Cycle through the messages you want to send (or just send 1 if you only have 1)
3. Construct the payload for each message
4. Disconnect from APNS
The flow of remote-notification data is one-way. The provider composes a notification package that includes the device token for a client application and the payload. The provider sends the notification to APNs which in turn pushes the notification to the device.
Restrictions
* The payload is limited to 256 bytes in total – this includes both the actual body message and all of the optional and additional attributes you might wish to send. Push notifications are not designed for large data transfer, only for small alerts. For example we only send a short alert message detailing the server monitoring alert triggered.
* APNS does not provide any status feedback as to whether your message was successfully delivered. One reason for this is that messages are queued to be sent to the device if it is unreachable; however, only the last sent message will be queued – overwriting any previously sent but undelivered messages.
* Push notifications should not be used for critical alerts because the message will only be delivered if the device has wifi or cellular connectivity, which is why we recommend combining push with another alerting method such as e-mail or SMS for our server monitoring alerts.
* The SSL certificates used to communicate with APNS, discussed below, are generated on an application level. The implementation discussed in this tutorial only concerns a single iPhone application so if you have several, you will need to adapt the code to use the appropriate certificate(s) where necessary.
Device Token
Each push message must be “addressed” to a specific device. This is achieved by using a unique deviceToken generated by APNS within your iPhone application. Once this token has been retrieved, you need to store it on your server, not within your iPhone application itself. It looks something like this:
c9d4c07c fbbc26d6 ef87a44d 53e16983 1096a5d5 fd825475 56659ddd f715defc
For the Server Density iPhone application, we call the necessary generation methods on app launch and pass it back to our servers via an HTTP API call. This stores the deviceToken in a database on our servers for that user so we can then communicate with the device linked to that user.
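On the device side, the registration that produces this token typically looks something like the sketch below (how you then post it to your own server is up to you):
// In your application delegate: ask APNS for a device token
- (void)applicationDidFinishLaunching:(UIApplication *)application {
[[UIApplication sharedApplication] registerForRemoteNotificationTypes:
(UIRemoteNotificationTypeAlert | UIRemoteNotificationTypeBadge | UIRemoteNotificationTypeSound)];
}
// iOS hands back the token here; forward it to your server via your own HTTP API
- (void)application:(UIApplication *)application didRegisterForRemoteNotificationsWithDeviceToken:(NSData *)deviceToken {
NSLog(@"deviceToken: %@", deviceToken);
}
- (void)application:(UIApplication *)application didFailToRegisterForRemoteNotificationsWithError:(NSError *)error {
NSLog(@"Push registration failed: %@", error);
}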
Feedback Service
Apple provide a feedback service which you are supposed to occasionally poll. This will provide a list of deviceTokens that were previously but are no longer valid, such as if the user has uninstalled your iPhone application. You can then remove the deviceToken from your database so you do not communicate with an invalid device.
Using the feedback service is not covered by this tutorial.
Certificates
The first thing you need is your Push certificates. These identify you when communicating with APNS over SSL.
Generating the Apple Push Notification SSL certificate on Mac:
1. Log in to the iPhone Developer Connection Portal and click App IDs
2. Ensure you have created an App ID without a wildcard. Wildcard IDs cannot use the push notification service. For example, our iPhone application ID looks something like AB123346CD.com.serverdensity.iphone
3. Click Configure next to your App ID and then click the button to generate a Push Notification certificate. A wizard will appear guiding you through the steps to generate a signing authority and then upload it to the portal, then download the newly generated certificate. This step is also covered in the Apple documentation.
4. Import your aps_developer_identity.cer into your Keychain by double clicking the .cer file.
5. Launch Keychain Access on your Mac and, from the login keychain, filter by the Certificates category. You will see an expandable entry called “Apple Development Push Services”.
6. Expand this entry, then right click on “Apple Development Push Services” > Export “Apple Development Push Services ID123”. Save this as the apns-dev-cert.p12 file somewhere you can access it.
7. Do the same again for the “Private Key” that was revealed when you expanded “Apple Development Push Services”, ensuring you save it as the apns-dev-key.p12 file.
8. These files now need to be converted to the PEM format by executing this command from the terminal:
openssl pkcs12 -clcerts -nokeys -out apns-dev-cert.pem -in apns-dev-cert.p12
openssl pkcs12 -nocerts -out apns-dev-key.pem -in apns-dev-key.p12
9. If you wish to remove the passphrase, either do not set one when exporting/converting or execute:
openssl rsa -in apns-dev-key.pem -out apns-dev-key-noenc.pem
10. Finally, you need to combine the key and cert files into an apns-dev.pem file, which we will use when connecting to APNS:
cat apns-dev-cert.pem apns-dev-key-noenc.pem > apns-dev.pem
It is a good idea to keep the files and give them descriptive names should you need to use them at a later date. The same process above applies when generating the production certificate.
Payload Contents
The payload is formatted in JSON, compliant with the RFC 4627 standard. It consists of several parts:
* Alert – the text string to display on the device
* Badge – the integer number to display as a badge by the application icon on the device home screen
* Sound – the text string of the name of the sound to accompany the display of the message on the device
This tutorial will only deal with the basics by sending a simple alert text string, but the alert can also be a dictionary containing various options to display custom buttons and the like.
Creating the payload
Using PHP it is very easy to create the payload based on an array and convert it to JSON:
$payload['aps'] = array('alert' => 'This is the alert text', 'badge' => 1, 'sound' => 'default');
$payload = json_encode($payload);
Echoing the contents of $payload would show you the JSON string that can be sent to APNS:
{
"aps" : { "alert" : "This is the alert text", "badge" : 1, "sound" : "default" }
}
This will cause a message to be displayed on the device, trigger the default alert sound and place a “1” in the badge by the application icon. The default buttons “Close” and “View” would also appear on the alert that pops up.
For the Server Density server monitoring iPhone application, it is important for the user to be able to tap “View” and go directly to the server that generated the alert. To do this, we add an extra dictionary of our own custom values:
$payload['aps'] = array('alert' => 'This is the alert text', 'badge' => 1, 'sound' => 'default');
$payload['server'] = array('serverId' => $serverId, 'name' => $name);
$output = json_encode($payload);
The custom dictionary server is passed to the application on the device when the user taps “View” so we can load the right server. The JSON looks like this:
{
"aps" : { "alert" : "This is the alert text", "badge" : 1, "sound" : "default" },
"server" : { "serverId" : 1, "name" : "Server name")
}
The size limit of 256 bytes applies to this entire payload, including any custom dictionaries.
The raw interface
Once an alert is generated within Server Density, the payload is built and then inserted into a queue. This is processed separately so that we can send multiple payloads in one go if necessary.
Apple recommends this method because if you are constantly connecting and disconnecting to send each payload, APNS may block your IP.
As described by Apple:
The raw interface employs a raw socket, has binary content, is streaming in nature, and has zero acknowledgment responses.
Opening the connection
The PHP 5 code to open the connection looks like this:
$apnsHost = 'gateway.sandbox.push.apple.com';
$apnsPort = 2195;
$apnsCert = 'apns-dev.pem';
$streamContext = stream_context_create();
stream_context_set_option($streamContext, 'ssl', 'local_cert', $apnsCert);
$apns = stream_socket_client('ssl://' . $apnsHost . ':' . $apnsPort, $error, $errorString, 2, STREAM_CLIENT_CONNECT, $streamContext);
If an error has occurred you can pick up the error message from $errorString. This will also contain the details if your SSL certificate is not correct.
The certificate file is read in relative to the current working directory of the executing PHP script, so specify the full absolute path to your certificate if necessary.
Note that when testing you must use the sandbox with the development certificates. The production hostname is gateway.push.apple.com and must use the separate and different production certificate.
Sending the payload
At this point, the code we use loops through all the queued payloads and sends them. Constructing the binary content to send to APNS is simple:
$apnsMessage = chr(0) . chr(0) . chr(32) . pack('H*', str_replace(' ', '', $deviceToken)) . chr(0) . chr(strlen($payload)) . $payload;
fwrite($apns, $apnsMessage);
Note that the $deviceToken is included from our database and stripped of the spaces it is provided with by default. We also include a check to send an error to us in the event that the $payload is over 256 bytes.
$apnsMessage contains the correctly binary formatted payload and the fwrite call writes the payload to the currently active streaming connection we opened previously, contained in $apns.
Once completed, you can close the connection:
fclose($apns);
php-apns
There is a free, open source server library that does all the above functionality called php-apns. We chose to implement it ourselves because it has a further dependency on memcached, we do not want to rely on 3rd party code for large and critical aspects of our code-base, and I am apprehensive about the suitability of PHP for running a continuous server process. We do all the above queue processing using our own custom cron system which runs every few seconds – that way PHP scripts do not need to be run as long-lived processes, something I’m not sure they were designed to do!
All done
That’s it! If you have any problems, post in the comments below.
Friday, November 5, 2010
Game Engine and Games Framework List
These engines are available for free use, but without the source code being available under an open source license. Many of these engines are commercial products which have a free edition available for them:
* Adventure Game Studio — Mainly used to develop third-person pre-rendered adventure games, this engine is one of the most popular for developing amateur adventure games.
* Cocos2d — A 2D game engine for making iPhone games.
* DikuMUD and derivatives — MUD engines
* dim3 — Freeware 3D javascript engine for the Mac (although finished games are cross platform).
* DX Studio — A freeware 3D game engine with complete tools for 3D video game development. Upgrading to paid licenses would unlock extra features.
* Game Maker Lite — Object-oriented game development software with a scripting language as well as a drag-and-drop interface.
* LPMud and derivatives (including MudOS and FluffOS) — MUD engines
* MUSH — MU* engine
* M.U.G.E.N — A 2D fighting game engine.
* Open Scene Graph — An open source 3D graphics toolkit, used by application developers in fields such as visual simulation, computer games, virtual reality, scientific visualization and modelling.
* Panda3D — (Releases prior to May 28, 2008) A relatively easy to use C++ game engine with Python bindings that was made by Disney and is owned by Carnegie Mellon University. Disney uses it to produce some of their games.
* Platinum Arts Sandbox Free 3D Game Maker — Open source and based on the Cube 2: Sauerbraten engine with a focus on game creation and designed for kids and adults. The program includes Non commercial content, but the main engine and large majority of the media can be used commercially. The Exciting Adventures of Master Chef Ogro was created using this engine by High School students.
* TinyMUCK — MU* engine
* TinyMUD — MU* engine
* Unity — An open-ended 3D game/interactive software engine for web, Windows, and Mac OS X. Upgrading to paid licenses can additionally enable support for the iPhone, Android and Nintendo Wii.
* World Builder — A classic Mac OS game engine.
* Wintermute Engine — A runtime and development tools for creating 2D and 2.5D point'n'click adventure games.
* RGSS — An engine made by Enterbrain to create RPGs using RPG Maker XP. RGSS2 was used for RPG Maker VX.
Commercial engines
* Alamo — the engine used in Star Wars: Empire at War by Petroglyph Games.
* Aurora Engine — For Role-playing games.
* Bork3D Game Engine — A cross-platform game engine primarily targeting iPhone and iPad.
* BigWorld — Server, client and development tools for the development of MMOG for games that run on Windows, Xbox 360, and PS3.
* BRender — A real-time 3D graphics engine for computer games, simulators and graphic tools.
* C4 Engine — A cross-platform game engine developed by Terathon Software.
* Cafu Engine — A game engine with development tools for creating multiplayer, cross-platform, real-time 3D games and applications.
* Coldstone game engine — An old game creation suite for Macintosh/Windows to create role-playing or adventure-style games.
* Corona SDK — A cross-platform, Lua-based game engine that can build games to the iPhone, iPad, or Android devices from the same set of code.
* CPAL3D — Complete game creation tools with scene editor, IDE and text server.
* CryEngine, CryEngine 2, CryEngine 3, CryEngine 3.5 — The game engine used for the first-person shooter computer game Far Cry. CryEngine 2 is a new generation engine developed by Crytek to create the FPS game Crysis.
* Crystal Tools — Square Enix's proprietary seventh generation game engine.
* DX Studio — Engine and editing suite that allows creation of real-time games and simulations.
* Dunia Engine — Engine (heavily modified version of the CryEngine) made especially for Far Cry 2 by Ubisoft Montreal.
* Earth-4 Engine — The graphics engine used in Earth 2160
* Electron engine — Developed by Obsidian Entertainment for their game Neverwinter Nights 2, based on the Aurora engine.
* Elflight Engine — Cross-platform 3D streaming game engine designed from the ground up for use over the Web. Games can play in a web browser window, in a separate window or full-screen. Java and OpenGL based.
* Enigma Engine — A real-time tactics game engine, used in Blitzkrieg.
* Esperient Creator — A very powerful 3D modeler and engine, used world wide for training, simulation, architecture, and games. Built-in Scripting, C/C++, CScript, or Lisp, Shader Editor, import 50+ 3D formats.
* Euphoria — This is a biomechanical Ragdoll engine by NaturalMotion.
* Freescape (1986) — Incentive Software; One of the first proprietary 3D game engines, used in Driller and 3D Construction Kit.
* Frostbite Engine — Game engine used for the next-gen title Battlefield: Bad Company.
* Gamebryo — Cross-platform game middleware for professional developers, notable for its rapid development.
* GameSalad — A 2D game engine that currently targets the iPhone and an Apple Safari web plugin, developed by Gendai Games. Has a visual programming interface that allows for rapid development.
* Gamestudio — A 2D and 3D game engine for beginners. Uses the Gamestudio development system and the lite-C programming language.
* Glacier, Glacier2 — Developed by IO Interactive and used for the Hitman series of games. Glacier2 is a new generation engine currently in development for upcoming games.
* Gogii Games Engine - a 2d multi-platform C++ engine supporting PC, Mac, iPhone and iPad. Used in casual games such as the Mortimer Beckett series.
* GrimE — Used in LucasArts graphical adventure games starting with Grim Fandango.
* Hedgehog Engine — Created by the Sonic Team with the capability of rendering high quality graphics at high speed. It was first used in Sonic Unleashed.
* HeroEngine — 3D game engine by Simutronics for building MMOs in a live collaborative environment.
* HPL Engine 2 — Used in Frictional Games survival horror games. Earlier versions are free software.
* id Tech 4 — (Also known as Doom 3 engine) Used by the games Doom 3, Quake 4, Prey and Quake Wars. Will become open source with the release of RAGE in September 2011.
* id Tech 5 — Currently in development by id Software as engine for their games, Doom 4 and Rage, and as a general purpose engine to be licensed.
* IMUSE — Specifically designed to synchronize music with visual action.
* Infernal Engine — Created by Terminal Reality, provides rendering, physics, sound, AI, and metrics for game development. Used in several games such as Ghostbusters: The Video Game, Mushroom Men: The Spore Wars, Bass Pro Shops: The Strike and Roogoo: Twisted Towers.
* INSANE — Used in LucasArts games.
* Infinity Engine — Allows the creation of isometric computer role-playing games.
* Jade engine — Developed by Ubisoft, originally for Beyond Good & Evil.
* Jedi — A game engine developed by LucasArts for Star Wars: Dark Forces and Outlaws.
* K2 Engine — An engine used in Heroes of Newerth and Savage2 by S2 Games.
* Kaneva Game Platform — A MMOG engine for independent and professional game development.
* Kinetica — A game engine developed by Sony for PlayStation 2.
* Leadwerks Engine — Leadwerks Engine is a 3D engine for rendering, sound, and physics in real-time games and simulations.
* Lemon Engine — Lemon Engine is a modular set of libraries for all aspects of game development across all major platforms.
* Lithtech Jupiter Ex — Developed by Monolith Productions to create the game F.E.A.R.
* LyN engine — Developed by Ubisoft, originally for Rabbids Go Home and Beyond Good & Evil 2.
* Medusa — A C++ 3D game engine developed by Palestar and used in the DarkSpace MMO. It features distributed world simulation, single tool version control and asset realisation, cross-platform compatibility and an integrated client/server network system.
* Monumental Technology Suite – A MMOG platform, including server and client technology and development / live management tools.
* MT Framework — Game engine created by Capcom and used for their games on Xbox 360, PlayStation 3 and PC.
* Multimedia Fusion 2 — A 2D game development system.
* Multiverse Network — An MMOG platform, including server, client, and tools. (Free for development and use — revenue sharing upon commercial deployment).
* Odyssey Engine — Used to create three dimensional computer role-playing games, used in Star Wars: Knights of the Old Republic
* Onyx Engine — Developed by Ubisoft
* Pie in the Sky — Used in two internal games by Pie in the Sky Software and then in the 3D Game Creation System and the games made with it.
* PhyreEngine — A cross platform (PC & PS3) graphics engine from Sony Computer Entertainment.
* Q (game engine) — A fully pluggable, extensible and customisable framework and tools from Qube Software for PC, Wii, PS2, PS3, Xbox, Xbox 360, PSP, iPhone etc. created by the team behind Direct3D.
* RAGE — A game engine created by Rockstar Games to power their upcoming video games on the Xbox 360 and PlayStation 3. Implemented in Grand Theft Auto 4.
* RelentENGINE — A next-generation FPS engine supporting massive destroyable city environments and realistic vehicle control, makes extensive use of shader model 3.
* RenderWare — A 3D API and graphics rendering engine.
* Revolution3D — A 3D graphics engine developed by X-Dream Project.
* RPG Maker VX — A 2D engine to make top-down and isometric-style role-playing games for Windows.
* RPG Maker XP — A 2D engine to make top-down and isometric-style role-playing games for Windows.
* RPG Maker 2003 — A 2D engine to make top-down and isometric-style role-playing games for Windows.
* RPG Maker 95 — A 2D engine to make top-down and isometric-style role-playing games for Windows.
* SAGE engine — Used to create real-time strategy games.
* Scaleform — A vector graphics rendering engine used to display Adobe Flash-based user interfaces, HUDs, and animated textures for games in PC, Mac, Linux, Xbox 360, PlayStation 2, PlayStation Portable, PlayStation 3, and Wii.
* SCUMM engine — Used in LucasArts graphical adventure games.
* Serious Engine — The engine by Croteam used in the epic Serious Sam: The First Encounter and The Second Encounter.
* Shark 3D — A middleware from Spinor for computer, video games and realtime 3D applications.
* ShiVa — A game engine with an authoring tool to produce 3d real time applications for Windows, Mac OS X, Linux, WebOS, Android, and iPhone.
* Silent Storm engine — A turn-based tactics/tactical RPG game engine, used in Silent Storm.
* Sith — A game engine developed by LucasArts for Jedi Knight: Dark Forces II.
* Source engine — A game engine developed by Valve Software for Half-Life 2. The SDK comes with Half-Life 2.
* Torque Game Engine — A modified version of a 3D computer game engine originally developed by Dynamix for the 2001 FPS Tribes 2.
* Torque Game Engine Advanced – A next-generation 3D game engine supporting modern GPU hardware and shaders.
* TOSHI — A fourth generation cross platform game engine designed by Blue Tongue Entertainment.
* Truevision3D — A 3D game engine using the DirectX API.
* Unigine — Cross-platform middleware engine.
* Unity — An open-ended 3D game/interactive software engine for web, Windows, Mac OS X, iOS (iPod, iPhone, and iPad), Android, and Nintendo Wii.
* Unreal Engine — A game engine for PC, Xbox 360 and PlayStation 3 .
* Vengeance engine — A video game engine based on the Unreal Engine 2/2.5
* Vicious Engine — Available for Microsoft Windows, Sony PlayStation 2, Microsoft Xbox, and Sony PlayStation Portable
* Virtools — A 3D engine combined with high-level development framework, used for game prototyping and rapid developments. Available for Windows, Macintosh, Xbox, PSP. Can publish standalone or for the 3DVia Web Player browser plugin.
* Vision Engine 8 — A cross-platform game engine, developed by Trinigy. Used in games such as: Arcania: A Gothic Tale, The Settlers 7: Paths to a Kingdom, Dungeon Hero, Cutthroat, and Three Investigators.
* Visual3D.NET Game Engine — All-in-one 3D game engine and toolset, fully written in C#/.NET for Windows. A browser player is roadmapped for v1.1.
* WGAF — The game engine developed by Guild Software which powers their MMORPG Vendetta Online.
* X-Ray — The game engine developed by GSC Game World which powers their FPS series, "S.T.A.L.K.E.R".
* XnGine — Developed by Bethesda Softworks, one of the first true 3D engines.
* Zillions of Games — used to develop games that happen on a grid, like chess
Wednesday, November 3, 2010
Develop iPhone Game with Tilemap
Tuesday, November 2, 2010
iPhone Map Kit - Tutorial and Code
Let's create a simple application which displays the address entered by the user on a map within the application. We'll call it MapApp.
1. First, create a Window-based application and name the project MapApp.
2. Add the MapKit framework to the project. (Control + Click Frameworks folder -> Add -> Existing Frameworks)
3. Create a new view controller class and call it MapViewController. Add a text field, button and map view to it.
#import <UIKit/UIKit.h>
#import <MapKit/MapKit.h>
@class AddressAnnotation; // the annotation class added in step 7
@interface MapViewController : UIViewController <MKMapViewDelegate> {
IBOutlet UITextField *addressField;
IBOutlet UIButton *goButton;
IBOutlet MKMapView *mapView;
AddressAnnotation *addAnnotation; // holds the pin currently shown on the map
}
@end
4. Now create a xib file named MapView.xib. Set its type to MapViewController and add a UITextField, UIButton and MKMapView to it.
Make sure you set the delegate for the mapView to the controller class.
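If you would rather wire the delegate up in code than in Interface Builder, a minimal sketch (in MapViewController.m) is:
- (void)viewDidLoad {
[super viewDidLoad];
mapView.delegate = self; // required for mapView:viewForAnnotation: (step 10) to be called
}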
5. Once the view is ready, update the MapAppDelegate so that the view controller and the view are loaded.
- (void)applicationDidFinishLaunching:(UIApplication *)application {
mapViewController = [[MapViewController alloc] initWithNibName:@"MapView" bundle:nil];
[window addSubview:mapViewController.view];
[window makeKeyAndVisible];
}
6. Now, build the app and check if the view appears correctly or not. We now have the UI ready for entering the address and button for updating the location in the map.
7. Add the class for showing the annotation at the location. Let's call this class AddressAnnotation.
@interface AddressAnnotation : NSObject <MKAnnotation> {
CLLocationCoordinate2D coordinate;
NSString *mTitle;
NSString *mSubTitle;
}
-(id)initWithCoordinate:(CLLocationCoordinate2D)c;
@end
@implementation AddressAnnotation
@synthesize coordinate;
- (NSString *)subtitle{
return @"Sub Title";
}
- (NSString *)title{
return @"Title";
}
-(id)initWithCoordinate:(CLLocationCoordinate2D)c {
if ((self = [super init])) {
coordinate = c;
NSLog(@"%f,%f", c.latitude, c.longitude);
}
return self;
}
@end
This class will basically show the title and the subtitle of the location on the map.
8. Let's add the method that will be called when the ‘Go’ button is tapped; it contains the code that actually displays the address location on the map. We'll call that action showAddress.
- (IBAction) showAddress {
//Hide the keypad
[addressField resignFirstResponder];
MKCoordinateRegion region;
MKCoordinateSpan span;
span.latitudeDelta=0.2;
span.longitudeDelta=0.2;
CLLocationCoordinate2D location = [self addressLocation];
region.span=span;
region.center=location;
if(addAnnotation != nil) {
[mapView removeAnnotation:addAnnotation];
[addAnnotation release];
addAnnotation = nil;
}
addAnnotation = [[AddressAnnotation alloc] initWithCoordinate:location];
[mapView addAnnotation:addAnnotation];
region = [mapView regionThatFits:region];
[mapView setRegion:region animated:TRUE];
}
9. The map view shows locations based on latitude and longitude, but we have the address in textual form, so we need to convert it into a CLLocationCoordinate2D. Note that in the above code we call a method named addressLocation to perform this conversion.
-(CLLocationCoordinate2D) addressLocation {
NSString *urlString = [NSString stringWithFormat:@"http://maps.google.com/maps/geo?q=%@&output=csv",
[addressField.text stringByAddingPercentEscapesUsingEncoding:NSUTF8StringEncoding]];
NSString *locationString = [NSString stringWithContentsOfURL:[NSURL URLWithString:urlString]];
NSArray *listItems = [locationString componentsSeparatedByString:@","];
double latitude = 0.0;
double longitude = 0.0;
if([listItems count] >= 4 && [[listItems objectAtIndex:0] isEqualToString:@"200"]) {
latitude = [[listItems objectAtIndex:2] doubleValue];
longitude = [[listItems objectAtIndex:3] doubleValue];
}
else {
//Show error
}
CLLocationCoordinate2D location;
location.latitude = latitude;
location.longitude = longitude;
return location;
}
The above code reads the address entered in the input box and gets the location from maps.google.com in CSV format. It then gets the latitude and longitude from it. The return code of 200 from google means success.
10. Finally, let's add the delegate method that will display the annotation on the map
- (MKAnnotationView *) mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation {
MKPinAnnotationView *annView=[[MKPinAnnotationView alloc] initWithAnnotation:annotation reuseIdentifier:@"currentloc"];
annView.pinColor = MKPinAnnotationColorGreen;
annView.animatesDrop=TRUE;
annView.canShowCallout = YES;
annView.calloutOffset = CGPointMake(-5, 5);
return annView;
}
This method basically creates an annotation view (a green pin) with the annotation that we added earlier to the MapView. Tapping the green pin will display the title and the subtitle.
So this was a very simple example of how a map can be shown from within an application. Hope this was helpful. Let me know your comments/feedback. Click here to download the code.
UPDATE: All this while Google was not looking for API key in the URL - http://maps.google.com/maps/geo?q=address&output=csv
The URL now needs to change to – http://maps.google.com/maps/geo?q=address&output=csv&key=YourGoogleMapsAPIKey
Monday, November 1, 2010
Touch Detection in cocos2d iphone example
To receive touch events, set self.isTouchEnabled = YES; in the init method of your layer.
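In a CCLayer subclass that typically looks like this (a minimal sketch):
-(id)init {
if ((self = [super init])) {
self.isTouchEnabled = YES; // the layer will now receive the touches* callbacks below
}
return self;
}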
The three approaches are:
1. Dumb input management. This isn't dumb in the sense of stupid, but instead is dumb in the sense of a dumb missile that will keep flying straight until it hits something. A more precise description would be ignorant of global state.
While usually not usable as-is in non-demo applications, this approach underpins the other two approaches, and is thus important.
Simply subclass CocosNode and implement any or all of these three methods (you don't have to define them in the interface, they're already defined by a superclass).
-(void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInView: [touch view]];
[self doWhateverYouWantToDo];
[self doItWithATouch:touch];
}
-(void)touchesMoved:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInView: [touch view]];
[self doWhateverYouWantToDo];
[self doItWithATouch:touch];
}
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInView: [touch view]];
[self doWhateverYouWantToDo];
[self doItWithATouch:touch];
}
The distinction between the three methods is, touchesBegan is fired when the user first presses their finger on the screen, touchesMoved is fired after the user has pressed their finger on the screen and moves it (but before they pick it up), and touchesEnded is fired when the user picks their finger up.
Using these three methods, you can easily fire actions whenever a Sprite (or any other Cocos2d subclass) is touched. For a simple application that may be sufficient.
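One detail worth noting: locationInView: returns UIKit coordinates, so before hit-testing a node you will usually want to convert the point into OpenGL coordinates. A rough sketch (mySprite is a hypothetical CCSprite ivar):
-(void)touchesEnded:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInView:[touch view]];
location = [[CCDirector sharedDirector] convertToGL:location]; // flip into GL space
if (CGRectContainsPoint([mySprite boundingBox], location)) {
[mySprite runAction:[CCBlink actionWithDuration:1.0f blinks:3]]; // react to the touch
}
}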
2. Top-down global input management. The next approach allows a very high level of control over handling input, but is prone to creating a monolithic method that handles all input management for your application.
First, it requires that you have references to all Sprite objects that you are interested in detecting input for. You can do that by managing the references manually, or you can set up the subclass to track all instances.
You can track instance references fairly easily, modeling after this code:
@interface MySprite : Sprite {}
+(NSMutableArray *)allMySprites;
+(void)track: (MySprite *)aSprite;
+(void)untrack: (MySprite *)aSprite;
@end
And the implementation:
@implementation MySprite
static NSMutableArray * allMySprites = nil;
+(NSMutableArray *)allMySprites {
@synchronized(allMySprites) {
if (allMySprites == nil)
allMySprites = [[NSMutableArray alloc] init];
return allMySprites;
}
return nil;
}
+(void)track: (MySprite *)aSprite {
@synchronized(allMySprites) {
[[MySprite allMySprites] addObject:aSprite];
}
}
+(void)untrack: (MySprite *)aSprite {
@synchronized(allMySprites) {
[[MySprite allMySprites] removeObject:aSprite];
}
}
-(id)init {
self = [super init];
if (self) [MySprite track:self];
return self;
}
-(void)dealloc {
[MySprite untrack:self];
[super dealloc];
}
So, maybe this is a bit of a pain to set up, but it can be pretty useful in other situations as well (like discovering which instances of MySprite are within a certain distance of a point).
Then, you implement the three methods from above in your Scene object, and use it to handle and route clicks.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInView: [touch view]];
NSArray * mySprites = [MySprite allMySprites];
NSUInteger i, count = [mySprites count];
for (i = 0; i < count; i++) {
MySprite * obj = (MySprite *)[mySprites objectAtIndex:i];
if (CGRectContainsPoint([obj rect], location)) {
// code here is only executed if obj has been touched
}
}
}
The advantage of this approach is that you have an extremely granular level of control over input management. If you only wanted to perform actions on touches that touch two instances of MySprite, you could do that. Or you could only perform actions when a certain global condition is activated, and so on. This approach lets you make decisions at the point in your application that has the most information.
But it can get unwieldy depending on the type of logic you want to implement for your user input management. To help control that, I usually roll a simple system for user input modes.
The implementation depends on your specific app, but you'd start by subclassing NSObject into a UIMode object.
@interface UIMode : NSObject {}
-(id)init;
-(void)setupWithObject: (id)anObject;
-(void)tearDown: (UIMode *)nextMode;
-(void)tick: (ccTime)dt;
-(BOOL)touchBeganAt: (CGPoint)aPoint;
-(BOOL)touchMovedAt: (CGPoint)aPoint;
-(BOOL)touchEndedAt: (CGPoint)aPoint;
@end
The implementations of all those methods in UIMode should be inert stubs that can then be overridden in subclasses as appropriate. My system is to have the touch?At methods return YES if they decide to handle a specific touch, and otherwise return NO. This lets user interface modes implement custom logic, or let a touch pass on to your default touch handling.
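A sketch of the inert base implementation might look like this (subclasses then override only what they need):
@implementation UIMode
-(void)setupWithObject: (id)anObject {}
-(void)tearDown: (UIMode *)nextMode {}
-(void)tick: (ccTime)dt {}
// Return YES to claim the touch, NO to let the default handling run
-(BOOL)touchBeganAt: (CGPoint)aPoint { return NO; }
-(BOOL)touchMovedAt: (CGPoint)aPoint { return NO; }
-(BOOL)touchEndedAt: (CGPoint)aPoint { return NO; }
@end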
Next update the interface for your subclass of Scene like this:
@interface MyScene : Scene {
UIMode * currentMode;
}
-(UIMode *)currentMode;
-(void)setCurrentMode: (UIMode *)aMode;
@end
Then, in your implementation you'd add some code along these lines:
-(UIMode *)currentMode {
return currentMode;
}
-(void)setCurrentMode: (UIMode *)aMode {
if (currentMode != nil) {
// this tearDown method is part of the imagined
// UIMode class, and lets a UIMode disable itself
// with knowledge of the subsequent UIMode for proper
// transitions between modes
[currentMode tearDown:aMode];
[currentMode release];
}
currentMode = [aMode retain];
}
Finally, you'd need to update the touchesBegan:withEvent: method to query the UIMode about whether it wants to handle each specific click.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
UITouch *touch = [touches anyObject];
CGPoint location = [touch locationInView: [touch view]];
// forward the specified location to the UIMode, and abort
// standard click handling if the UIMode decides to handle
// the click
UIMode * uim = [self currentMode];
if (uim != nil && [uim touchBeganAt:location]==YES) return;
NSArray * mySprites = [MySprite allMySprites];
NSUInteger i, count = [mySprites count];
for (i = 0; i < count; i++) {
MySprite * obj = (MySprite *)[mySprites objectAtIndex:i];
if (CGRectContainsPoint([obj rect], location)) {
// code here is only executed if obj has been touched
}
}
}
This is the approach I prefer, because it is fairly simple, and allows an extremely high amount of flexibility. I realize that I dumped a ton of code here, and apologize. Hopefully you can still find the thread of thought intertwined into the jumble.
3. Bottom-up global input management. I won't provide much code for this approach, as it isn't one that I use, but it's a compromise between the first and second approaches.
For each instance of some MySprite class, override the touchesBegan:withEvent: (and moved and ended variants as well, if you want them) method, and then notify a global object about the touch occurring.
It would look something like this:
-(void)touchesBegan: (NSSet *)touches withEvent: (UIEvent *)event {
CurrentScene * s = [self currentScene]; // Not a real method.
[s mySpriteTouched:self];
}
Of course, this means you'd need to pass a reference to the current scene to each instance of MySprite, or you can use a singleton to simplify.
static CurrentScene *sharedScene = nil;
+(CurrentScene *)sharedScene {
@synchronized(self) {
if (sharedScene == nil)
[[self alloc] init]; // allocWithZone: below assigns sharedScene
}
return sharedScene;
}
+(void)releaseSharedScene {
@synchronized(self) {
if (sharedScene != nil) [sharedScene release];
sharedScene = nil;
}
}
+(id)allocWithZone: (NSZone *)zone {
@synchronized(self) {
if (sharedScene == nil) {
sharedScene = [super allocWithZone:zone];
return sharedScene;
}
}
return nil;
}
-(id)retain {
return self;
}
-(unsigned)retainCount {
return UINT_MAX;
}
-(void)release {}
-(id)autorelease {
return self;
}
The code is a bit of a clusterfuck, in my humble opinion, but it is still quite convenient, as it allows us to convert the touchesBegan:withEvent method to this:
-(void)touchesBegan: (NSSet *)touches withEvent: (UIEvent *)event {
[[CurrentScene sharedScene] mySpriteTouched:self];
}
And we don't have to explicitly pass the reference to the CurrentScene instance to each instance of MySprite. Objective-C has a lot of these painful pieces of code that are rather annoying to implement, but can save a lot of effort once they are implemented. My advice is to use them, early and infrequently.
Well, there you have it, three approaches to handling touch detection for Cocos2d iPhone, presented in a confusing and at most halfway organized article.
Monday, October 25, 2010
Display UITextField in Cocos2d Game
I create the UITextField like so:
txtName = [[UITextField alloc] initWithFrame: CGRectMake(160, 240, 200, 32)];
I add it to the subview like so:
[[[CCDirector sharedDirector] openGLView] addSubview: txtName];
The problem I'm having is that, when I change the orientation to PortraitUpsideDown with this command:
[[CCDirector sharedDirector] setDeviceOrientation:CCDeviceOrientationPortraitUpsideDown];
the UITextField and the keyboard don't change with the new orientation. When I click on the text box, the keyboard appears upside down and on the top of the screen.
I know I can change the rotation of the UITextField using CGAffineTransformMakeRotation, but how do I make the keyboard appear upright and at the bottom of the screen when in PortraitUpsideDown orientation?
When I detect an orientation change, in addition to setting the Director's device orientation like so:
[[CCDirector sharedDirector] setDeviceOrientation:CCDeviceOrientationPortraitUpsideDown];
I also needed to set the UIApplication's StatusBarOrientation as well
[[UIApplication sharedApplication] setStatusBarOrientation:UIInterfaceOrientationPortraitUpsideDown];
This caused the keyboard to orient and position itself correctly.
Hope this helps someone!
Friday, October 22, 2010
Mac App Store Coming Soon - Port your cocos2d game for Mac
The Mac App Store will be similar to the App Store, but instead of distributing iOS applications, it will distribute Mac applications.
This is big news for cocos2d users, since cocos2d already supports Mac!!
The guidelines and other requirements for the Mac App Store are online:
* Mac App Store Review Guidelines
* App Store Resource Center
So, download the latest cocos2d version, and start porting your cocos2d game for Mac!
Wednesday, October 20, 2010
Cocos2d Game Development Performance Tips
When I started working on the bizarre experiment that eventually became Space Harvest, I initially used Core Animation. This is great, up to a point, but I found I soon hit a performance wall - Core Animation wasn’t flexible enough to handle the types of things I wanted to do, and doesn’t really give you a lot of control over what’s happening under the hood.
Once I switched to Cocos2d, I found that a) it was a lot easier to get something that ran relatively smoothly, and b) because it’s designed for games, it provides loads of functionality to make your life easier.
This post is about some of the things I learned about improving performance when using Cocos2d. A lot of the time, I found out about this stuff by doing it the wrong way to start with. Though I doubt there'll be anything here for experienced game developers, I hope my tips may be useful to anyone who (like me) is quite new to OpenGL and game development.
Profiling with Instruments
As with any software, before you start doing any work to improve performance, you should first look for where the bottlenecks are.
Run your project with the CPU sampler instrument to see where your program is spending its time. Though you can sometimes get valuable results by profiling on the simulator, you should try to do as much profiling on the device as possible. You probably already know the device usually runs software much slower than the simulator, but this is even more obvious with CPU/GPU intensive software like games. The device is MUCH slower at certain things than the simulator, so the only real way of seeing where the actual performance bottlenecks lie is to run on the device.
I find the most helpful way to start looking for potential performance bottlenecks is:
* Change Sample Perspective to ‘Running sample times’
* Change Active Thread to ‘Main Thread’. 99% of the time, the performance intensive parts of your game will be single threaded (on a single core device like the iPhone / iPod touch, it wouldn’t really make sense to do it any other way)
* Turn off invert call tree, and expand the list until you find [CCDirector mainLoop]
They’ll probably be two main places where your application will be spending its time - drawing, and running your main loop for game logic.
Always look for the low hanging fruit - those parts of your code that are taking up lots of time, and can be easily optimised. If your game is spending any significant amount of time in [CCScheduler tick], you should start your optimisation work there. Optimising your game logic will usually be less painful than optimising your drawing.
Testing on different devices
For most iPhone OS applications, the differences in hardware between devices are not particularly significant. Users might notice that things are a little snappier on newer devices like the 3GS and newer iPod Touches, but generally, it isn't a big deal.
For games, the performance gap between the older devices and the newer ones is HUGE.
In Space Harvest, each level has a loading screen because it can take a while to pre-load the textures and load the maps that make up the game world. I thought it would be nice to include tips on how to play on the loading screen so users weren’t actually staring at nothing while each level loaded.
I performed the majority of my testing on my 3G iPhone. During the last couple of weeks before I released the first version of Space Harvest, I got to test Space Harvest on a 3GS iPhone. The first thing I noticed was that I could no longer read the tips. What might have been a ten second wait for loading on the 3G became a barely noticeable blue flash on screen before the level started on the 3GS. I ended up introducing a ‘tap to continue’ message, just to let people with newer devices see the playing tips.
So, the newer devices are faster - not just a bit faster, but a lot faster. A lot faster at loading textures, a lot faster at drawing things with OpenGL, a lot faster at just about everything. If you aren’t testing on older devices, how will you know if your game is even playable on older devices?
Ignoring the iPad for now, in performance terms, there are three classes of iPhone OS device:
* Slowest devices: iPhone / iPhone 3G / 1st gen iPod Touch
* Mid-range devices: 2nd gen iPod Touch, 3rd gen iPod Touch 8GB
* Fastest devices: iPhone 3GS, 32GB / 64GB 3rd gen iPod Touch
A couple of other random notes about the differences between older and newer devices:
* Older devices have half the RAM of newer devices (128MB vs 256MB), which means the likelihood of low memory warnings is much greater. This makes it even more important to manage your memory carefully.
* Older devices only support textures up to a maximum of 1024x1024 pixels. You should avoid textures larger than this size if you want your game to work on older phones and iPods.
Textures and Texture Atlases
Loading textures on the device is rather slow. Because of this, you should try to load textures before the user starts playing a level that may need them, otherwise you might get hiccups in frame rate.
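For example, texture pre-loading during a loading step can be as simple as asking CCTextureCache for each image up front (the file names here are placeholders):
// Warm the cache while the loading screen is visible
[[CCTextureCache sharedTextureCache] addImage:@"playerAtlas.png"];
[[CCTextureCache sharedTextureCache] addImage:@"backgroundAtlas.png"];
// Sprites created later from these files reuse the cached textures instead of hitting disk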
When you create a sprite in Cocos2d, you’ll normally pass in a reference to the texture you want it to use, eg:
CCTexture2D *texture =
[[CCTextureCache sharedTextureCache] addImage:@"mytexture.png"];
CCSprite *sprite = [CCSprite spriteWithTexture:texture];
This helps keep things nice and simple. However, if you’re loading all your textures in advance, you may notice that loading starts to slow down on older devices - the more textures you load, the slower it gets.
Speeding up loading times
At one point during the development of Space Harvest, loading textures was taking more than 20 seconds. Just for loading textures. Once textures were loaded, the user would still have to wait for the map to load. Ouch!
The reason for this was that I hadn’t paid attention to what everyone was saying, and I wasn’t using texture atlases.
A texture atlas is basically a large image that contains lots of smaller textures. On the right is an example of one of the texture atlases used in Space Harvest.
Using a texture atlas can help speed up drawing significantly (more on this below), but equally importantly, it helps speed up texture loading, and helps reduce the amount of memory your textures will use once loaded.
Space Harvest uses lots of different sprites, some with as many as 50 frames of animation. By combining 50 textures into one texture, you'll cut texture loading times significantly.
The newest versions of Cocos2d make this pretty easy. Here, we’re creating a sprite using a rectangular portion (specified in pixels) of a larger texture:
CCTexture2D *texture =
[[CCTextureCache sharedTextureCache] addImage:@"myatlastexture.png"];
CCSprite *sprite =
[CCSprite spriteWithTexture:texture rect:CGRectMake(0,0,32,32)];
Reducing memory usage
I mentioned you can also save memory by using atlas textures. This is because textures in Open GL ES must have a width and height that are a power of two, eg 64x128, 256x1024, 512x512 etc.
Cocos2d is smart enough to resize your images for you when it comes to loading textures, but this wastes a lot of space: a 144x93 texture, for example, becomes a 256x128 texture once it is loaded into memory. This means we've ended up with more wasted space in our texture than used space!
For a single texture, this won’t be a big deal, but what happens when we load 50 textures like this? 50 times as much waste. Again, using texture atlases is a great way to solve this problem - you can easily combine lots of textures that don’t have a power of two width and height into a single texture that does.
Flipping textures
Another tip to cut down on memory usage is to use flipped textures. If your sprite looks the same when drawn facing in the opposite direction (but just horizontally or vertically flipped), you can use the same texture, and do the flipping in code by setting the flipX / flipY properties of your sprite.
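For example, a character that can face either direction only needs one set of frames (a sketch; the texture and rect are placeholders):
CCSprite *facingRight = [CCSprite spriteWithTexture:texture rect:CGRectMake(0, 0, 32, 32)];
CCSprite *facingLeft = [CCSprite spriteWithTexture:texture rect:CGRectMake(0, 0, 32, 32)];
facingLeft.flipX = YES; // same texture in memory, mirrored when drawn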
Pixel formats
Cocos2d provides several different pixel formats for loading your textures. These are quite distinct from the format you use to save your texture images.
When saving a PNG image in Photoshop, I can decide whether I want to save it as an 8-bit indexed colour image, or a 24-bit truecolor image, or a 32-bit image that includes an alpha channel. The format I choose will depend on the nature of the image. 8-bit PNG is best suited for images with few colours that don’t use partial transparency. 32-bit images can have many colours and include partial transparency, though the file size will often be significantly larger.
When loading textures, the original format of the image doesn’t really matter that much - what’s important is the pixel format we use for loading. As with saving images for the web, it’s basically a question of balancing image quality and size.
kTexture2DPixelFormat_RGBA8888 is the default pixel format for textures, and provides the best image quality. It uses 8 bits for each colour (Red, Green and Blue), plus 8 bits for the alpha channel, for a total of 32 bits per pixel.
kTexture2DPixelFormat_RGBA4444 will use only 4 bits for each colour, plus 4 bits for the alpha channel, for a total of 16 bits per pixel. A texture stored in memory in this format will use half the size of one loaded with kTexture2DPixelFormat_RGBA8888.
For larger textures like atlases, this difference is very important. A 1024x1024 texture will use 4MB of texture memory when loaded with RGBA8888, but only 2MB of texture memory when loaded with RGBA4444!
Try to avoid using RGBA8888 unless you absolutely need the best possible quality for a particular texture. A lot of the time, you won't even notice the difference between RGBA8888 and one of the other pixel formats. Gradients are a good example of where RGBA8888 is most useful.
To cut down on the number of times you have to set the pixel format, set it to the format you’re likely to use most often when your game starts:
[CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA4444];
Then, whenever you need to change the pixel format for loading a particular texture, make sure you change it back afterwards:
[CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA8888];
CCTexture2D *texture =
[[CCTextureCache sharedTextureCache] addImage:@"buttonsatlas.png"];
[texture setAliasTexParameters];
[CCTexture2D setDefaultAlphaPixelFormat:kTexture2DPixelFormat_RGBA4444];
Considering the pixel format you'll end up using is important when planning your texture atlases. As each loaded atlas can only have one pixel format, you should try to keep the textures that require RGBA8888 together in the same atlases, so you only have to use that pixel format for those textures.
A guide to the different pixel formats Cocos2d uses for textures, including hints on when to use each, is available in the cocos2d documentation.
PVRTC
PVRTC is a special texture format you can use on iPhone OS devices. PVRTC textures take up less texture memory than regular textures of the same size, and can be drawn faster.
You can generate PVRTC textures from regular images using the texturetool program that comes with the Developer tools.
While PVRTC textures have some advantages, they also have one big disadvantage - compression artifacts.
Space Harvest's graphics are probably the worst-case scenario - the kind of images for which PVRTC is least well suited. It's a 2D game, so you're basically always looking at the textures head on, and because the visual style relies on crisp, non-antialiased graphics, compression artifacts are that much more visible.
In fact, PVRTC textures can be very useful, even in 2D games. For more detailed, anti-aliased sprites, or photorealistic textures, you might not even notice the difference. Space Harvest uses PVRTC for background images. But wherever clarity is very important, you should probably avoid them.
CCSpriteSheet
CCSpriteSheet is one way to unlock big performance improvements when drawing. Each sprite sheet has a texture atlas. When we create a sprite that uses that texture atlas, we can choose to attach our sprite to the sprite sheet by adding it as a child of the sprite sheet.
Why do this? Well, our sprite sheet will take over the drawing of our sprite. Rather than drawing each sprite individually, it will draw all sprites attached to it at once. This matters because one of the best ways to improve performance in an OpenGL application is to cut down on the number of GL calls your code makes. Cocos2d handles most of the OpenGL work for you, but it helps to understand what it's doing behind the scenes.
Here is part of the draw method for CCSprite:
BOOL newBlend = NO;
if( blendFunc_.src != CC_BLEND_SRC || blendFunc_.dst != CC_BLEND_DST ) {
    newBlend = YES;
    glBlendFunc( blendFunc_.src, blendFunc_.dst );
}

#define kQuadSize sizeof(quad_.bl)
glBindTexture(GL_TEXTURE_2D, [texture_ name]);

int offset = (int)&quad_;

// vertex
int diff = offsetof( ccV3F_C4B_T2F, vertices);
glVertexPointer(3, GL_FLOAT, kQuadSize, (void*) (offset + diff) );

// color
diff = offsetof( ccV3F_C4B_T2F, colors);
glColorPointer(4, GL_UNSIGNED_BYTE, kQuadSize, (void*)(offset + diff));

// tex coords
diff = offsetof( ccV3F_C4B_T2F, texCoords);
glTexCoordPointer(2, GL_FLOAT, kQuadSize, (void*)(offset + diff));

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

if( newBlend )
    glBlendFunc(CC_BLEND_SRC, CC_BLEND_DST);
Look at the number of calls to functions beginning with 'gl': seven in the worst case. This code runs every time we draw a sprite that isn't attached to a sprite sheet, so 100 sprites means 700 gl calls, and 1000 sprites means 7000.
CCSpriteSheet's draw method is a bit more complex, and calls a separate method on its texture atlas to do the drawing. For brevity, I won't repeat it here, but it makes around 10 gl calls. Crucially, though, it only makes those 10 calls to draw ALL of the sprites attached to our sprite sheet: 100 sprites is 10 calls, and even with 1000 sprites it's still 10 calls. In practice, this gives you a huge boost in drawing performance.
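As a minimal sketch of attaching sprites to a sheet (assuming the cocos2d 0.99-era CCSpriteSheet API and a made-up atlas file name):
// One sprite sheet per texture atlas; it draws all of its child sprites in one batch.
CCSpriteSheet *sheet = [CCSpriteSheet spriteSheetWithFile:@"characters.png"];
[self addChild:sheet];
// Each sprite picks its own rectangle out of the shared atlas texture.
CCSprite *hero = [CCSprite spriteWithSpriteSheet:sheet rect:CGRectMake(0, 0, 32, 32)];
[sheet addChild:hero]; // added to the sheet, not the layer, so the sheet handles drawing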
A note about depth sorting and CCSpriteSheet
If you need to depth-sort sprites from one atlas with sprites from another, things get a bit more complex.
By default, Cocos2d renders graphics using the Painter’s Algorithm. That means that objects are drawn in the order in which they appear in their parent node’s children array. The end result is that objects at the back are drawn first, objects that appear on top of everything else are drawn last.
This presents a problem if you need some objects attached to a particular sprite sheet to be drawn in front of those from another sprite sheet, but behind others from that same sprite sheet. By default, once the sprites from the first atlas have been drawn, the sprites from the second atlas will draw over the top of them, regardless of their z position relative to their siblings.
The way around this is to use CCNode's vertexZ property. This gives you access to OpenGL's depth buffer, which lets you draw your sprites in any order you like and still have them correctly depth sorted in relation to one another.
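A rough sketch of that approach (assuming the cocos2d 0.99-era API, two hypothetical sprites attached to different sprite sheets, and an OpenGL view that was created with a depth buffer):
// Enable OpenGL depth testing so vertexZ, not draw order, decides what ends up in front.
[[CCDirector sharedDirector] setDepthTest:YES];
// Give sprites from different sheets explicit depths; the depth buffer sorts them at render time.
spriteFromSheetA.vertexZ = -10;
spriteFromSheetB.vertexZ = 0;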
The biggest downside to this approach is that it can be difficult to get semi-transparent sprites to render properly, as the object directly behind them may not have been drawn when they come to be rendered. Apple’s advice is to draw semi-transparent objects last, which in practice means you’ll need to keep their sprite sheet in a higher position in its parent’s children array.
I didn't use vertexZ for depth sorting in Space Harvest - from the couple of days I spent experimenting with it, I found lots of little side effects that made it tricky to get things working the way I wanted. Regardless, try to use CCSpriteSheet as much as you can.
Pre-render programmatically generated textures if you can
Some of the effects in Space Harvest are programmatically generated using Core Graphics. Many objects have animations for taking damage and being destroyed; these are created by combining different images using different blending modes.
Originally, I generated textures for destroy animations for all my sprites when starting the game. However, Core Graphics is rather slow on the iPhone platform, so I eventually moved to saving the generated images out, pasting them into my atlases in Photoshop, and loading them as regular textures in the final version. This cut around 10 seconds(!) off startup time, so if you do have any programmatically generated textures that don't change, pre-rendering them is a good way to go.
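If you want to do the same, here is a rough sketch of capturing a Core Graphics-generated image to a PNG on disk so it can be pasted into an atlas later; the size and file name are placeholders:
// Render the generated art into an image context once, then write it out as a PNG.
UIGraphicsBeginImageContext(CGSizeMake(128, 128));
CGContextRef ctx = UIGraphicsGetCurrentContext();
// ... Core Graphics drawing and blend-mode work goes here, using ctx ...
UIImage *rendered = UIGraphicsGetImageFromCurrentImageContext();
UIGraphicsEndImageContext();
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"destroy_frame.png"];
[UIImagePNGRepresentation(rendered) writeToFile:path atomically:YES];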
Tuesday, October 19, 2010
How To Avoid Having Your App Rejected for Core Functionality Issues and Crashing
Make sure you thoroughly test your application on an actual iPhone and iPod touch in addition to the iPhone Simulator. A large percentage of applications are rejected due to various types of crashes, including crashes on launch, which would have been found and dealt with if they'd been tested on a real device. Don't skip that step in the development process.
Tutorials for iPad Development
On this page I have decided to list tutorials specifically created for the iPad along with other resources such as user interface design tips, and graphical resources such as UI prototyping tools and vector kits.
You can expect this page to update with new tutorials and resources so be sure to bookmark it.
Newer resources appear towards the top of the listing.
1. Rapid Prototyping of iPad apps using Keynote – a great guide along with many interactive elements for prototyping (key here being interactive) using Keynote.
2. How To Port An App To the iPad – Covers how to convert your app to handle the iPad screen sizes, specifically autosizing and orientation, and when to use iPad elements.
3. Custom Input View Tutorial – A tutorial on how to create great looking custom input views on the iPad.
4. iPad Multitouch – A great example with code showing how to utilize all 11 available touches on the iPad.
5. UIPopoverController Tutorial - A tutorial on how to use the iPad UIPopoverController element.
6. iPhone To Hybrid – A guide to making apps that work on the different iOS platforms simultaneously with minimal work.
7. UISplitview Tutorial – A tutorial on how to use the specific iPad only UISplitview interface element.
8. Designing For iPad Reality Check – A brilliant, in-depth guide to how the interface of an iPad app should be developed.
9. iPad Application Design – A detailed look at some of the finer details of how to design an iPad user interface.
10. iPad UI Graphic Kits - Several graphic kits for use in tools such as Photoshop and Omnigraffle.
11. Testing Out iPad Code With An iPhone or iPod Touch – Article about testing an iPad interface using the iSimulate iPhone app.
Open Source Game Engine Comparison for iPhone
Choosing Your Open Source iPhone Game Engine
Sparrow Framework
The Sparrow Framework is a very lightweight 2D game engine created in Objective-C. In a very short amount of time I was able to understand the framework, and I find it to be very intuitive.
If you’d like to take a look at some actual coding with the Sparrow Framework be sure to check out the Beginners iPhone Action Game Programming Tutorial.
While I have not done much Flash game programming, the developers state that the game engine was created with Flash game developers in mind.
The framework includes all the features you'd need for creating a basic 2D game, such as easy animation and a sound engine.
Cocos2D iPhone
The Cocos2D iPhone game engine is a port of a game engine originally written in Python and converted to Objective-C for the iPhone. As you can tell from the name, Cocos2D is designed for 2D games; that said, although the engine works in a 2D world, it includes a growing collection of high-quality 3D special effects.
Cocos2D has been used in many games on the iPhone App Store; you can visit the official site here, where many of them are listed.
Cocos2D is the first engine to check out. Many may be put off by the lack of a 3D world, but if you look at most of the top iPhone games the gameplay is 2D; in fact, the iPhone's touch-screen controls can make it difficult to operate in a 3D world.
Also included is support for the Chipmunk physics engine, and the latest version of Cocos2D includes an OpenAL-based sound engine.
The engine provides more examples than any of the other engines out there because of the large community. Overall I’d say the engine is as easy to use as any engine that does not have an environment editor.
Uses the LGPL license.
SIO2 Engine
The SIO2 game engine is an excellent 3D game engine written in C. There is a free open source version and an indie version for $49. The free edition requires you to show a splash screen at the start of your game indicating your use of the engine. This, in my opinion, is extremely fair considering the quality of the engine.
The game engine uses Blender in its toolchain for scene and model creation. If you haven't used Blender, it is a sophisticated open source 3D modeling program. In my opinion this is the only thing I don't really like about SIO2; while some love it, I can't stand using Blender, as I've found it can't compare to the top commercial modeling programs. Fortunately there are many Blender plugins that allow you to import a wide variety of modeling formats.
SIO2 comes with an excellent set of tutorials, and provides support for sophisticated features such as skeletal animation, and soft-body physics which are explained in the tutorials.
I've found the latest version of the SIO2 game engine, version 1.4, to perform significantly better than previous versions. If you haven't checked out SIO2 in a while, I suggest you take another look.
I recommend SIO2 to those who insist on a 3D world and thus can’t use Cocos.
Oolong Engine
The Oolong game engine is a 3D engine written in C++, and it provides excellent performance. The downside is that it is difficult to use for those who are not familiar with OpenGL ES.
Oolong supports a wide variety of features; as I said, my only problem with it is that it is difficult to use. This is a low-level engine designed for programmers, so if you're just getting into game development I would stay away.
You will find the latest version on Google Code. There is very little documentation for Oolong, but the community is very active, and you can get answers to many of your questions there.
I would recommend Oolong to those looking to create their own game engine who want something to start with.
Uses the MIT license.
Irrlicht Engine
I mention Irrlicht here only because I received a message from someone stating that it was available on the iPhone. I know that it has been used in the creation of apps already available on the iPhone.
The Irrlicht game engine is a 3D game engine written in C++.
While there is no official port available on the Irrlicht website for the iPhone with some tinkering I was able to get the OpenGL ES version running on the iPhone — somewhat. You will find the OpenGL ES version hidden away in the repository.
Irrlicht is an excellent open source engine that has support for an extremely wide variety of file formats, and has the best support for the “classic” BSP format that I’ve seen in an open source game engine. There are also numerous other tools that have been created for the engine.
All this being said, I can't recommend Irrlicht because there is no official port, and if you check out the forums there is really no one willing to help those trying to get it running on the iPhone, even though some have shipped iPhone apps with it.
The Irrlicht engine uses the Zlib license.
Summary
The Sparrow Framework makes an excellent first choice for those developing a 2D iPhone game. Cocos2D is the most popular, and has the most support but is less intuitive. You will learn Objective-C while using the engine, and the engine has been proven in a wide variety of games.
For 3D games my choice is SIO2; although I'm not a fan of Blender, the Blender-based toolchain does make SIO2 more accessible than the other proven 3D iPhone game engines.
Sunday, October 17, 2010
Integrating AdMob with Cocos2D-iPhone Applications
AdMob is a mobile advertising network that serves ads to be displayed on mobile devices. Ads can be served via native applications or within a browser, and are not limited to just smartphones. AdMob offers plenty of metrics to track the number of ads served, geographic region, cellphone operator, and device type. This tutorial will provide the guidance needed to include AdMob-served ads within your Cocos2D-iPhone application. This example is based upon v0.8.2 of the Cocos2D-iPhone framework.
Your first step is to register with AdMob to get a publisher ID and input information regarding your application. Registration and eventual access to the SDK is free.
Once registered and logged in, you’re presented with a dashboard where you can track your applications. As shown in the following, I already have MyGame.
Let’s add a new site by selecting the +Add Site/App button. As shown in the following image, there are various choices available from which to publish ads. Go ahead and select iPhone App to add the application details.
Now enter your application's details in the AdMob panel. Don't worry about the iTunes link, as you can come back and add it later if the application hasn't been published yet. You can also change the theme color, but this too can be changed later by entering your own color code values.
After selecting continue at the bottom, we can then download the SDK. Interestingly, the SDK’s sample files will come pre-populated with a newly created publisher ID for your specific application. We’ll get to that in a moment, but for now keep in mind that this ID will be unique for your application. It can also be a source of frustration as I’ve seen with other users who forget to input their ID and wonder why their application isn’t serving ads.
After selecting the download SDK button and when returning to the Sites & Apps page we see the new application has been added.
Select the setup link for your application to then view the application’s details. Again we see the particular publisher ID as well as some other features that we can control such as the types of ads we can serve. There are many ad categories, and you could find some surprising content offered up by your site.
Selecting the Category/Type Settings tab shows the various ad categories available as well as presents an ability to turn them off. Keep in mind that with so many devices offering up ads and a limited inventory of available ads, it’s not likely that you’ll consistently have a 100% fill rate. Meaning, there may be just some times that an ad may not be presented due to the lack of inventory. As a result, turning off ad categories will most likely reduce your fill rate.
Now that we have a feel for the AdMob dashboard and the various configuration capabilities, it’s time to start integrating their SDK with our application. Let’s untar the downloaded file and review the files. The following image shows the provided files and sample XCode project. Open the XCode project and take a look at the AdViewController.m file.
Upon closer inspection, we’ll see that our previously assigned Publisher ID, done at the time of the application’s creation, is pre-populated in the publisher ID string:
We’ll use this same value in a bit within our application. The AdMob ads are displayed in 320×48 UIViews and can essentially be placed anywhere. There are choices for other sizes, especially for the iPad, but for this exercise we’re going to stick with the smaller ads.
The AdMob provided README within the SDK provides the necessary details for project integration. Specifically, we start by adding the AdMob library code and headers to the XCode project. These files are contained in the AdMob subdirectory and consist of the following:
* AdMobDelegateProtocol.h
* AdMobView.h
* libAdMob.a
Quite frankly, I just dragged and dropped the AdMob folder from the sample projects into my project.
We also need to ensure the following frameworks have been added to the project.
* CoreLocation
* CoreGraphics
* QuartzCore
* AddressBook
* AudioToolbox
* MediaPlayer
Now we’re going to add the code within the class that will display the ad. For my particular case, I’m displaying ads in the main menu of Balloon-Boy as shown in the earlier image.
Add the following statements to the class header file that will display the ads. In my case I added them to my MenuScene.h.
#import "AdMobDelegateProtocol.h"
#import "AdMobInterstitialDelegateProtocol.h"
#import "AdMobInterstitialAd.h"
#define AD_REFRESH_PERIOD 60.0 // display fresh ads once per minute
Then we declare the following variables within the MenuScene.h.
AdMobView *adMobAd;
NSTimer *refreshTimer; // timer to get fresh ads
UIViewController *viewController;
We’ll now turn our attention to the main class file (MenuScene.m). Include the following methods within this file where the ad will be displayed. The provided comments are fairly self explanatory, and should give you an idea of what’s going on.
- (void)didReceiveAd:(AdMobView *)adView {
    // put the ad at the top middle of the screen in landscape mode
    adMobAd.frame = CGRectMake(0, 432, 320, 48);
    CGAffineTransform makeLandscape = CGAffineTransformMakeRotation(M_PI * 0.5f);
    makeLandscape = CGAffineTransformTranslate(makeLandscape, -216, -134); // centers the ad in landscape mode
    adMobAd.transform = makeLandscape;
    [viewController.view addSubview:adMobAd];
}

// Sent when an ad request failed to load an ad
- (void)didFailToReceiveAd:(AdMobView *)adView {
    NSLog(@"AdMob: Did fail to receive ad in AdViewController");
    [adMobAd release];
    adMobAd = nil;
}

- (void)onEnter {
    viewController = [[UIViewController alloc] init];
    viewController.view = [[CCDirector sharedDirector] openGLView];
    adMobAd = [AdMobView requestAdOfSize:ADMOB_SIZE_320x48 withDelegate:self];
    [adMobAd retain]; // this will be released when it loads (or fails to load)
    [super onEnter];
}

- (void)onExit {
    [adMobAd removeFromSuperview];
    [adMobAd release];
    [super onExit];
}

// Request a new ad. If a new ad is successfully loaded, it will be animated into location.
- (void)refreshAd:(NSTimer *)timer {
    [adMobAd requestFreshAd];
}

// AdMobDelegate methods
- (NSString *)publisherIdForAd:(AdMobView *)adView {
    return @"a14b19f75680944"; // this is your publisher ID
}

- (UIViewController *)currentViewControllerForAd:(AdMobView *)adView {
    return viewController;
}

- (UIColor *)adBackgroundColor {
    return [UIColor colorWithRed:0 green:0.749 blue:1 alpha:1]; // this should be prefilled; if not, provide a UIColor
}

- (UIColor *)primaryTextColor {
    return [UIColor colorWithRed:0 green:0 blue:0 alpha:1]; // this should be prefilled; if not, provide a UIColor
}

- (UIColor *)secondaryTextColor {
    return [UIColor colorWithRed:0 green:0 blue:0 alpha:1]; // this should be prefilled; if not, provide a UIColor
}

- (BOOL)mayAskForLocation {
    return NO; // this should be prefilled; if not, see AdMobProtocolDelegate.h for instructions
}
If all went as planned, this newly added code should now display a rotated ad at the top middle of your view in landscape mode.
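One more note: the AD_REFRESH_PERIOD define from MenuScene.h is only useful if a timer actually calls refreshAd:. The steps above don't show that part, so here is a rough sketch (my own addition, not from the AdMob sample) of starting and stopping the timer alongside the ad in onEnter and onExit:
// In onEnter, after requesting the first ad:
refreshTimer = [NSTimer scheduledTimerWithTimeInterval:AD_REFRESH_PERIOD
                                                target:self
                                              selector:@selector(refreshAd:)
                                              userInfo:nil
                                               repeats:YES];
// In onExit, before releasing the ad:
[refreshTimer invalidate];
refreshTimer = nil;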
As an aside, some people have experienced an error when attempting to integrate the AdMob code within their project. The error message speaks to a duplication issue and is as follows:
ld: duplicate symbol .objc_category_name_NSCharacterSet_NSCharacterSet_Extensions in /Users/Home/Documents/Xcode/MyCrazyProject/AdMob/libAdMobDeviceNoThumb.a(NSCharacterSet_Extensions.o) and /Users/Home/Documents/Xcode/MyCrazyProject/build/Distribution-iphoneos/libcocos2d libraries.a(NSCharacterSet_Extensions.o)
Online discussion has attributed this error to both AdMob and Cocos2D referencing the same library, which in this case is TouchJSON. As a result, some people have indicated that simply deleting the TouchJSON directory from the Cocos2D package solved the problem. I have not experienced this error while running under v0.8.2 of Cocos2D, so I cannot say definitively why it shows up every once in a while.
Additionally, other users have experienced problems when using CocosLive. Since they still needed the TouchJSON library, they were forced to rename all of its classes to avoid conflicts with AdMob.
AdMob Metrics
Now's probably a good time to also explain some of the metrics and terms regarding online advertising. The following image shows an example report for my newly released Balloon-Boy application. Upon closer inspection, you can see it shows the breakdown of activity by country. As discussed earlier regarding available ad inventory, we can see that the application made nearly 2,900 ad requests, but only about 2,700 of them were fulfilled (impressions). Out of all of those impressions there were 41 clicks. Interestingly, the Asian market had a much higher click-through rate as well as a much higher payout as measured by the eCPM. The CPM acronym comes from the phrase "cost per mille," where "mille" is Latin for "thousand."
The eCPM value is calculated by the following:
eCPM = (Total Earnings / Impressions) * 1,000
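For example, with made-up numbers: $1.50 in earnings across 2,700 impressions works out to 1.50 / 2,700 * 1,000, or an eCPM of roughly $0.56.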
The idea is to allow easy comparison of values to see how effective a campaign is. For us on the receiving end, it's easy to see that the payout in Asia was much higher. The advertisers appeared willing to pay more per click, and the ads were well targeted given the high click-through rate.
This concludes the Integrating AdMob with Cocos2D-iPhone Applications tutorial. I hope that the knowledge will be useful.
Friday, October 15, 2010
OpenFeint Integration with cocos2d
OpenFeint is a service that gives your iPhone/iPod touch application the ability to provide online score tracking. The service is free and can easily be incorporated into your applications. This tutorial will cover the integration of OpenFeint 2.4.3 with the Cocos2D-iPhone framework. Before getting started, it is assumed that you already have familiarity with:
* Objective-C
* XCode
* Cocos2D-iPhone framework
The expectation is that you already have a working Cocos2D-based application to which you'd like to add leaderboard functionality. If you're new to Cocos2D-iPhone, then you should visit http://www.cocos2d-iphone.org to review the available documentation and download the latest framework. You do not have to be a seasoned veteran to use Cocos2D, and the framework makes it very easy to create high quality games. At the time of this writing, version 0.8.2 of the Cocos2D-iPhone framework was the most stable version, and hence it provides the basis for this tutorial.
There are several features available within OpenFeint in terms of score and event management. Beyond just tracking high scores, the developer can establish goals such that the player can earn achievements or even initiate online challenges. For this tutorial, the focus will be to simply enable an online leaderboard. The player will be able to track their own progress as well as compare their scores against other players via a global leaderboard.
There is also the ability to enable a chat function between players. While I have not tried it, I have been told by other developers that by enabling this feature you’re required to assign a mature rating. The mentality is that unrestricted chat within a game could expose minors to unscrupulous users and content.
Access to the OpenFeint code is done through the OpenFeint developer portal, and there is no charge to enroll in the developer program. Visit http://www.openfeint.com/ to sign-up for access.
Once authenticated with the portal, you’ll have access to the developer home page and will be able to download the OpenFeint SDK.
When enabling an application to use the OpenFeint service, you register it within the developer portal. Selecting the green plus button at the top of the page allows you to start this process. In this example we’ll add a test application, My Crazy App, so that you can see the various screens. Yet, the specific code examples shown later on will reflect the integration process with one of my OpenFeint enabled games.
After selecting submit, the application is given a Client Application ID, Product Key, and Product Secret. As we’ll later see, these values are needed when initializing OpenFeint within our application. Keep them handy as we’ll need them when it comes time to add the respective OpenFeint code to the application.
Create your leaderboards from the Leaderboard link in the OpenFeint portal's left-side menu.
Now that we’ve enrolled in the OpenFeint program, downloaded the SDK, and created an entry for our application, it is time to start the integrating process within our Cocos2D-iPhone application.
There are some preliminary steps before including the code within your class files. If you're upgrading from a prior version, the process is relatively straightforward: you simply delete the old OpenFeint folder from your XCode project and ensure that you add some additional frameworks, which will be detailed in the following section. Note: it is also important that you go into your project's directory and delete both the OpenFeint and build directories.
Let's review the OpenFeint README.txt for information on integrating it with the application.
If you’ve downloaded and extracted the OpenFeint SDK, you’re ready to begin the process. For step 4 of the README file, we’re to drop the unzipped OpenFeint folder into our XCode project. And since the game is landscape only, we’ll remove the Portrait folder found under the Resources directory as suggested for step 5. The following shows the inclusion of the OpenFeint folder within our XCode project.
Continuing on with step 6, we right-click on our project icon within XCode and select Get Info. In the Build tab, we add the value -ObjC to the Linker Flags and select 'Call C++ Default Ctors/Dtors in Objective-C'. These are shown in the following screen capture.
For step 7, the README.txt notes the following frameworks that must be included within the project. And if you’re like me, I can never remember the all too convenient path so here it is for reference. Right click on frameworks and add existing framework. Navigate to:
/Developer/Platforms/iPhoneSimulator.platform/Developer/SDKs/iPhoneSimulatorx.y.z.sdk/System/Library/Frameworks/
Here are the frameworks you’ll need to ensure you’ve included in your XCode project.
* Foundation
* UIKit
* CoreGraphics
* QuartzCore
* Security
* SystemConfiguration
* libsqlite3.0.dylib (located in (iPhoneSDK Folder)/usr/lib/)
* CFNetwork
* CoreLocation
* MapKit (if building with SDK 3.0 or newer)
And here we see the frameworks that have been added to the XCode project.
For the latest release of OpenFeint, if you have set your ‘iPhoneOS Deployment Target’ to any version before 3.0 you must weak link some libraries. Select ‘Targets’ in the Groups & Files pane.
* Right click your target and select Get Info.
* Select the ‘General’ tab.
* Under ‘Linked Libraries’ change the following libraries from ‘Required’ to ‘Weak’
* UIKit
* MapKit
For my project I'm not building for older OS versions, which, from what I've heard, does exclude a sizable base of users.
Finally, in step 9 of the README.txt, we need to add the following line to our prefix header:
#import "OpenFeintPrefix.pch"
At this point we have the SDK integrated within our XCode project along with some necessary support requirements. Again, this tutorial covers the specific integration of OpenFeint with a Cocos2D-iPhone v0.8.2 based application. As with most application architectures, there will be various class files such as the application delegate, game scene, and menu scene.
Let’s now add the required OpenFeint code to our project’s AppDelegate.
Within the AppDelegate’s main file we’ll add various code elements ranging from importing other class headers to specific code needed within existing methods. For example, there’s specific OpenFeint code that should be added to applicationWillResignActive, applicationDidBecomeActive, and applicationWillTerminate. These methods can already be found in your Cocos2D application delegate, and we’ll be adding a line here or there to support integration with OpenFeint. The OpenFeint developer documentation details these requirements as it’s expected that you’re taking action based upon the application’s state, such as it shutting down.
Add the following import statements to your AppDelegate’s main file:
#import "OpenFeint.h"
#import "MyOFDelegate.h"
Now locate the respective methods within your AppDelegate’s main file, and add the lines identified for OpenFeint.
- (void)applicationWillResignActive:(UIApplication *)application {
    [[SimpleAudioEngine sharedEngine] stopBackgroundMusic];
    [[Director sharedDirector] pause];
    [OpenFeint applicationWillResignActive]; // Add for OpenFeint
}
- (void)applicationDidBecomeActive:(UIApplication *)application {
    [[Director sharedDirector] resume];
    [OpenFeint applicationDidBecomeActive]; // Add for OpenFeint
}
- (void)applicationWillTerminate:(UIApplication *)application {
    [[SimpleAudioEngine sharedEngine] stopBackgroundMusic];
    [[Director sharedDirector] end];
    [[NSUserDefaults standardUserDefaults] synchronize]; // Add for OpenFeint
}
Next we'll create a class that will handle some crucial tasks whenever we call the OpenFeint leaderboard. At the time, I based this on the included MyOFDelegate files. Pay particular attention to the dashboardDidAppear and dashboardDidDisappear methods. You'll see that we momentarily pause the Cocos2D director and then re-enable it once the dashboard disappears. This is a critical step, because otherwise it's possible that input will be inconsistent or not captured at all while the dashboard is displayed. By pausing the director, we ensure that all user input is captured by the dashboard.
Create the following files within your XCode project.
MyOFDelegate.h
//
// MyOFDelegate.h
// MyGame
//
#import "OpenFeintDelegate.h"
@interface MyOFDelegate : NSObject<OpenFeintDelegate>
- (void)dashboardWillAppear;
- (void)dashboardDidAppear;
- (void)dashboardWillDisappear;
- (void)dashboardDidDisappear;
- (void)userLoggedIn:(NSString*)userId;
- (BOOL)showCustomOpenFeintApprovalScreen;
@end
MyOFDelegate.m
// MyOFDelegate.m
#import "OpenFeint.h"
#import "MyOFDelegate.h"
#import "cocos2d.h"
@implementation MyOFDelegate
- (void)dashboardWillAppear
{
}
- (void)dashboardDidAppear
{
[[Director sharedDirector] pause];
[[Director sharedDirector] stopAnimation];
}
- (void)dashboardWillDisappear
{
}
- (void)dashboardDidDisappear
{
[[Director sharedDirector] resume];
[[Director sharedDirector] startAnimation];
}
- (void)userLoggedIn:(NSString*)userId
{
OFLog(@"New user logged in! Hello %@", [OpenFeint lastLoggedInUserName]);
}
- (BOOL)showCustomOpenFeintApprovalScreen
{
return NO;
}
@end
For my particular Cocos2D application, I have a splash scene that quickly shows the company logo, the Cocos2D logo, and then transitions to the game’s main menu. I initialize OpenFeint during this process just before loading the main menu. With that said, the header file for the splash scene class has the following code. We’re including the MyOFDelegate class as well as declaring the ofDelegate variable. I’ve streamlined the content to only include the references to the OpenFeint code. I’ve left the reference to the menuScene method though as that’s where I perform the initialization.
// SplashScene.h
// Balloon-Boy
//
// Created by Tim Sills on 12/1/09.
//
#import <UIKit/UIKit.h>
#import "cocos2d.h"
@class MyOFDelegate; // Add for OpenFeint
@interface SplashScene : Scene {
MyOFDelegate *ofDelegate; // Add for OpenFeint
}
-(void)menuScene;
@end
For the main file of the splash scene class, we import the following header files:
#import "OpenFeint.h"
#import "MyOFDelegate.h"
For the initialization process, we declare a dictionary that includes the details specific to our application. Wherever you want to perform the initialization within your application, add the following code.
NSDictionary *settings = [NSDictionary dictionaryWithObjectsAndKeys:
    [NSNumber numberWithInt:UIInterfaceOrientationLandscapeRight], OpenFeintSettingDashboardOrientation,
    [NSNumber numberWithBool:YES], OpenFeintSettingDisableUserGeneratedContent,
    nil];
ofDelegate = [MyOFDelegate new];
OFDelegatesContainer* delegates = [OFDelegatesContainer containerWithOpenFeintDelegate:ofDelegate];
[OpenFeint initializeWithProductKey:@"aMMPDEYldUia4j86uV1mK"
andSecret:@"PMjb1n9dV4MVO09U05R56MUCeJf7VZHnKlQIvRKtn"
andDisplayName:@"My Game"
andSettings:settings // see OpenFeintSettings.h
andDelegates:delegates]; // see OFDelegatesContainer.h
For my particular application, once the initialization has been done, it then transitions to the main menu. If you’re already a registered user you’ll get the welcome back screen or if you’re a new user (or perhaps using a new device), you’ll have the opportunity to either login or register for a new account.
When a user selects the Scores option at the main menu, the following line of code is executed within the called method. Take note of the passed text value; this is the ID for our leaderboard that was assigned at the time of creation, which in this case reflects the value for the test application registered for this tutorial.
[OpenFeint launchDashboardWithHighscorePage:(NSString*)@"124567"];
The code is pretty self-explanatory and launches the high score dashboard for our application. You also have the option of defaulting to other screens when initially launching the OpenFeint dashboard, so be sure to check the SDK for such details.
This all sounds great, but the next question is how do we post these scores to OpenFeint? The process is incredibly simple. For my Balloon-Boy game, after the user’s piloted balloon has had three impacts, the game is over. At this point I give the player the opportunity to play again, but when this method is initially called, I execute the following bit of code to post the user’s score. In particular, I have a variable named currentScore that contains the user’s accumulated score. The currentScore’s value is posted to the OpenFeint leaderboard. It is also important to note the leaderboard ID again. This is the same value given to us when we first created the leaderboard in the developer portal. The leaderboard details are provided again for reference in the following screen capture.
The leaderboard ID is used to identify the specific board for the application that’ll receive the value. The following shows the single line of code used to post the current score to the appropriate leaderboard for our application.
[OFHighScoreService setHighScore:currentScore forLeaderboard:@"124567" onSuccess:OFDelegate() onFailure:OFDelegate()];
The following screen capture shows where, in my application, the user is presented with the opportunity to either play again or return to the main menu. When this overlay is initially shown, there is a brief confirmation message at the bottom of the screen as the score is posted to the OpenFeint leaderboard.
The integration process is nearly complete, with only some minor additional changes required. In order to compile the OpenFeint C++ code, we have to change the extension of all of our main class files from .m to .mm; gameScene.m, for example, becomes gameScene.mm. There is no impact on the existing Objective-C code, but if this isn't done there will be a lot of problems when attempting to compile.
If you like this article, please bookmark us. We will be back soon with new articles that will help you develop your own iPhone games.