Sunday, January 24, 2010

Core Data

I was fortunate to attend the iPhone Tech Talk, hosted by Apple in San Jose in October 2009. One of the sessions I attended was on Core Data. At a very high level, Core Data is an Apple software framework for storing and managing data on the iPhone and the Mac. Please note that I don't plan on covering database design theory in this post, so I'm assuming a familiarity with it.

Core Data is an object graph management and persistence framework. It maintains a graph of managed objects, and that graph is managed by a managed object context. Core Data serializes these objects and persists them to a data store, and it can use several different underlying store types - XML, binary, or SQLite. The managed object context is a key concept in Core Data: it acts as a proxy between the application's objects and the underlying data store(s). The persistent store coordinator is responsible for managing the relationship between the managed object context and the persistent store(s). In other words, a managed object context wraps a collection of managed objects and sits on top of a persistent store coordinator, which in turn houses one or more persistent stores. A managed object context is an instance of the NSManagedObjectContext class.

It's important to note that the Core Data sqlite file should not be accessed directly from within code, as the schema may change and it's not intended for direct manipulation.

In summary: a managed object context is a collection - a scratchpad - of managed objects. Managed objects are instances of the entities defined in the data model; in the SQLite store, a managed object corresponds to a row in a table. The managed object context sits on top of a persistent store coordinator and acts as a proxy between the objects in the application and the underlying data store(s). A persistent object store is backed by an XML, binary, or SQLite file. The persistent store coordinator sits between the managed object context and the persistent store(s) and abstracts one or more persistent object stores as a single aggregate. Note that a managed object context talks directly to exactly one persistent store coordinator, but a persistent store coordinator can have many persistent object stores and can serve more than one managed object context.
After working with Core Data, it is a worthwhile exercise to open the Core Data SQLite file and dig through the tables; a quick look will show how Apple handles referential integrity. As noted above, though, the file is not meant to be accessed directly from within code - do not modify it or write queries against it. There is no guarantee that the schema will stay the same; Core Data is meant to abstract these implementation-dependent details, and the implementation may change in any future release.

The following notes were derived from three sources: Apple WWDC 2009, the iPhone Core Data session in San Jose last October, and my experience working with object/relational databases.

Creating a Core Data Stack

  1. Load the Data Model
  2. Create the persistent store coordinator and set the model
  3. Add the data store
  4. Create a managed object context and set the persistent store coordinator (a sketch of all four steps follows)
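
A rough sketch of those four steps, assuming a SQLite store in the app's Documents directory (the store file name is a placeholder):

   // 1. Load the data model (merges all models found in the main bundle)
   NSManagedObjectModel *model = [NSManagedObjectModel mergedModelFromBundles:nil];

   // 2. Create the persistent store coordinator and hand it the model
   NSPersistentStoreCoordinator *coordinator =
      [[NSPersistentStoreCoordinator alloc] initWithManagedObjectModel:model];

   // 3. Add the data store (a SQLite file in the app's Documents directory)
   NSString *docs = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory,
                        NSUserDomainMask, YES) lastObject];
   NSURL *storeURL = [NSURL fileURLWithPath:
                        [docs stringByAppendingPathComponent:@"MyApp.sqlite"]];
   NSError *error = nil;
   if (![coordinator addPersistentStoreWithType:NSSQLiteStoreType configuration:nil
                                            URL:storeURL options:nil error:&error]) {
      NSLog(@"Unable to add persistent store: %@", error);
   }

   // 4. Create a managed object context and set the persistent store coordinator
   NSManagedObjectContext *context = [[NSManagedObjectContext alloc] init];
   [context setPersistentStoreCoordinator:coordinator];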

Core Data Model

  1. A Core Data model does not need to be normalized. Define data contracts early on. Don't over-constrain the data model - design with usage patterns in mind. Define entities with Xcode's visual data model editor (Core Data objects represent rows in these entities). In other words, a Core Data object of type PERSON is an instance of the PERSON entity in the Core Data model, and a person object will typically have relationships with other objects in the store. A managed object context manages these objects and their relationships and persists them to the underlying data store (via the persistent store coordinator). Core Data is simple to set up - don't complicate it.
  2. NSManagedObjectContext defines the verbs (CRUD, or fetch/insert/delete/save) that act on managed objects (the nouns) - see the sketch after this list
  3. NSManagedObjectContext tracks changes to the properties of managed objects
  4. Core Data fits very well in the MVC architecture.
  5. An NSPersistentStoreCoordinator can have multiple persistent stores (NSPersistentStore), and multiple managed object contexts can use a single NSPersistentStoreCoordinator
  6. It is possible to delete the SQLite store file if the stored data needs to be wiped. In the simulator, this file is stored in $HOME/Library/Application Support/iPhone Simulator/User/Applications/SOMEAPPLICATIONGUID/Documents
  7. Use Core Data's built-in versioning and migration for model changes - easy to use, and lightweight cases need little or no code
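
A minimal sketch of the verbs mentioned in item 2 above, using a hypothetical Person entity with a name attribute (context is the NSManagedObjectContext from the stack sketch earlier):

   // Insert: create a new managed object for the (hypothetical) Person entity
   NSManagedObject *person = [NSEntityDescription insertNewObjectForEntityForName:@"Person"
                                                           inManagedObjectContext:context];
   [person setValue:@"John Appleseed" forKey:@"name"];

   // Save: push the pending changes in the context down to the persistent store
   NSError *error = nil;
   if (![context save:&error]) {
      NSLog(@"Save failed: %@", error);
   }

   // Delete: also goes through the context, and takes effect at the next save
   [context deleteObject:person];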

Multithreading

  1. Use one managed object context per thread; use notifications to propagate changes made to managed objects in another context.
  2. Never pass managed objects between threads - pass object IDs instead (effectively the primary key). See the sketch after this list.
  3. Within a thread, create a new managed object context and then tell that context which object IDs to fetch.
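
A rough sketch of items 2 and 3, assuming the object has already been saved (so its ID is permanent) and that coordinator is the shared NSPersistentStoreCoordinator from earlier:

   // On the originating thread: capture only the object ID, never the object itself
   NSManagedObjectID *objectID = [person objectID];

   // On the worker thread: create a private context against the same coordinator
   NSManagedObjectContext *workerContext = [[NSManagedObjectContext alloc] init];
   [workerContext setPersistentStoreCoordinator:coordinator];

   // Re-materialize the object in this thread's context by its ID
   NSManagedObject *localPerson = [workerContext objectWithID:objectID];
   // ... use localPerson, save workerContext if needed, then release it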

Fetching

    Use NSFetchedResultsController (it can break fetches up into sections) - it is fast, efficient, and easy. A sketch follows this list.
  1. Create an NSFetchedResultsController
  2. Set the fetch request, predicate, and sort descriptors (the fetch request carries the predicate and sort descriptors). The fetch request and predicate are immutable once the controller is created.
  3. The sectionNameKeyPath returns the section name (section information is cached)
  4. Set yourself as its delegate and implement your table view data source methods
  5. Create separate controllers for different data sets
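
A rough sketch of those steps against the hypothetical Person entity, sectioned by a hypothetical sectionKey attribute (the fetch request and sort descriptors are set up first, since they are fixed once the controller is created):

   NSFetchRequest *request = [[NSFetchRequest alloc] init];
   [request setEntity:[NSEntityDescription entityForName:@"Person"
                                  inManagedObjectContext:context]];
   NSSortDescriptor *sort = [[NSSortDescriptor alloc] initWithKey:@"lastName" ascending:YES];
   [request setSortDescriptors:[NSArray arrayWithObject:sort]];

   NSFetchedResultsController *frc =
      [[NSFetchedResultsController alloc] initWithFetchRequest:request
                                          managedObjectContext:context
                                            sectionNameKeyPath:@"sectionKey"
                                                     cacheName:@"PersonCache"];
   frc.delegate = self;   // self implements NSFetchedResultsControllerDelegate

   NSError *error = nil;
   if (![frc performFetch:&error]) {
      NSLog(@"performFetch failed: %@", error);
   }
   [sort release];
   [request release];

The table view data source methods can then pull rows straight from the controller via its sections array and objectAtIndexPath:.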

NSFetchRequest

  1. Set the entity that you want to fetch against
  2. Create an NSEntityDescription - provide the managed object context and the entity you want to work with
  3. Set the predicate (if applicable)
  4. Execute the fetch - pass in the fetch request and an error pointer. A sketch follows this list.
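
A sketch of a plain fetch (same hypothetical Person entity; the predicate format string is just an example):

   NSFetchRequest *request = [[NSFetchRequest alloc] init];
   NSEntityDescription *entity = [NSEntityDescription entityForName:@"Person"
                                              inManagedObjectContext:context];
   [request setEntity:entity];
   [request setPredicate:[NSPredicate predicateWithFormat:@"lastName BEGINSWITH[cd] %@", @"Sm"]];

   NSError *error = nil;
   NSArray *results = [context executeFetchRequest:request error:&error];
   if (results == nil) {
      NSLog(@"Fetch failed: %@", error);
   }
   [request release];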

Batching

  1. Set up the fetch request (you only need some of the objects in memory at a time, so use setFetchBatchSize:)
  2. Set the batch size - see the sketch below
  3. You get back an NSArray of results; objects are faulted in batch by batch as you access them
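
Continuing the fetch request sketch above, batching is just one extra call before executing the fetch:

   // Only batchSize objects are pulled into memory at a time; the rest of the
   // returned array is faulted in lazily as it is accessed
   [request setFetchBatchSize:20];
   NSArray *results = [context executeFetchRequest:request error:&error];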

Prefetching

  1. Use setRelationshipKeyPathsForPrefetching: on the fetch request - see the sketch below
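
For example, if Person has a (hypothetical) to-many addresses relationship that will be touched for every result, prefetching it avoids a round of faults:

   [request setRelationshipKeyPathsForPrefetching:[NSArray arrayWithObject:@"addresses"]];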

Managed Object Interaction

    NSManagedObject
  1. Use accessors
  2. Use Objective-C properties
  3. To-many relationships are sets
  4. NSManagedObject provides key-value coding/observing out of the box

    Change Tracking
  1. The managed object context observes object changes - leverage the change tracking notifications
  2. Register for object-level changes via KVO
  3. Register for graph-level changes via NSNotifications - a sketch of both follows this list
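
A sketch of both registration styles (the observed key path and the selector name are placeholders):

   // Object-level changes: KVO on a single managed object's attribute
   [person addObserver:self
            forKeyPath:@"name"
               options:NSKeyValueObservingOptionNew
               context:NULL];

   // Graph-level changes: notifications posted by the managed object context
   [[NSNotificationCenter defaultCenter] addObserver:self
                                            selector:@selector(contextObjectsDidChange:)
                                                name:NSManagedObjectContextObjectsDidChangeNotification
                                              object:context];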


Iterate over Custom UITableViews

UITableViews on the iPhone often contain a variable number of cells. The following code snippet should be helpful for those who need to iterate over a variable length table and subsequently read or set data model and cell state.
   // Loop through the cells in each section and make sure the data model is in sync.
   // Note: -cellForRowAtIndexPath: returns nil for rows that are not currently visible.
   NSInteger numSections = [self numberOfSectionsInTableView:someTable];
   for (NSInteger s = 0; s < numSections; s++) {
      NSLog(@"Section %d of %d", s, numSections);
      NSInteger numRowsInSection = [self tableView:someTable numberOfRowsInSection:s];
      for (NSInteger r = 0; r < numRowsInSection; r++) {
         NSLog(@"Row %d of %d", r, numRowsInSection);
         NSIndexPath *indexPath = [NSIndexPath indexPathForRow:r inSection:s];
         MyCell *cell = (MyCell *)[someTable cellForRowAtIndexPath:indexPath];
         DataModelObject *theObject =
            (DataModelObject *)[fetchedResultsController objectAtIndexPath:indexPath];
         // Push the cell's state into the corresponding managed object
         [theObject setValue:[NSNumber numberWithBool:cell.someProperty] forKey:@"someEntityKey"];
      }
   }


Monday, December 7, 2009

Mobile Technology - Where Are We Going?

Mac OS X and iPhone OS contain open source components.
  1. Open Source Components that ship with Mac OS X 10.6.2 and their corresponding Open Source Projects (src avail for dl/browse) - http://www.apple.com/opensource/
  2. Open Source Components that ship with Mac OS X 10.6.2, iPhone OS 3.1.2 and Developer Tools 3.2.1 (source avail for dl) Also browseable by prior versions - http://opensource.apple.com
The open source components are typically released under various open source licenses, such as the Apache License, the GNU General Public License, and the BSD License, among others. By including open source components, Apple is able to leverage the work of the open source community and provide a more robust and secure platform for its users. The open source components can also be used by developers to build applications for macOS and iOS, as well as for other platforms.

On to Android...so what is it? Android is an open-source mobile operating system. It uses a modified Linux 2.6 kernel tuned for embedded devices. To quote the Android Web site - "Android relies on Linux version 2.6 for core system services such as security, memory management, process management, network stack, and driver model. The kernel also acts as an abstraction layer between the hardware and the rest of the software stack." (http://developer.android.com/guide/basics/what-is-android.html).

Linux is known for its stability, security, and flexibility. It has become a popular choice for servers, workstations, and even desktops. Linux is widely used in the cloud computing industry and powers many of the world's largest websites and services. It has a strong developer community and a vast selection of open-source software available for free. Additionally, Linux is highly customizable, allowing users to modify and tailor their operating system to their specific needs.

Linux (GNU/Linux) is a UNIX-like operating system that comes in many flavors, or distributions. The distributions typically differ in how package (application) management and distribution are handled, and oftentimes a custom or stripped-down kernel ships with the distribution. There are over 600 GNU/Linux distributions, and they run on a myriad of devices, from embedded controllers to large-scale grids and supercomputers. GNU/Linux has been around a while - Linus Torvalds released the first Linux kernel in 1991. The GNU Project, created by Richard Stallman in 1983, had the goal of creating a complete Unix-compatible software system composed entirely of free software. Hence came GNU/Linux. Read more here - http://www.gnu.org/.

To clarify, the GNU/Linux operating system consists of the Linux kernel (created by Linus Torvalds in 1991) and GNU software, which was developed by the GNU Project (launched by Richard Stallman in 1983) and includes tools such as the GNU C Compiler (GCC), GNU Debugger (GDB), and Bash shell.

The Linux kernel was inspired by the Minix operating system and Unix principles, but it was not directly derived from Minix, SVR4, or BSD. It was created as a free and open-source alternative to commercial operating systems and has since become widely used on servers, desktops, and mobile devices.

The process, thread, and memory models in GNU/Linux are based on the Unix principles and have been tested and refined over several decades. They are known for their stability and scalability, making GNU/Linux a popular choice for high-performance computing and enterprise applications.

Many GNU userland tools, such as Bash, GCC, GNU coreutils, and GNU Emacs, are included in most GNU/Linux distributions. These tools are often used by developers and system administrators to perform various tasks, such as writing and compiling code, managing files and directories, and configuring system settings. The availability of these tools on GNU/Linux systems is a result of the close relationship between the GNU Project and the Linux kernel.

Android uses a modified version of the Linux kernel, which is a Unix-like operating system kernel, but its userland libraries are not based on GNU/Linux. Instead, Android uses a custom, BSD-like userland developed by Google. This allows Android to have more control over the code and reduce dependencies on GNU/Linux. However, Android still maintains compatibility with many GNU/Linux tools and libraries.

As opposed to Objective-C, which developers use to write applications for the iPhone, Java is the language for writing applications on Android devices. Android provides libraries on top of the kernel for 2D and 3D rendering, type support, SQLite access, media, libc (actually called Bionic), etc. The Linux kernel provides memory management, process and threading support, and so on, while applications run on the Dalvik VM, a register-based virtual machine: at build time, Java .class files are converted into a second bytecode format, .dex, which the Dalvik VM executes. An Android application is loaded into its own process and is allocated its own Dalvik VM instance. The Dalvik VM was chosen because of its memory and processor efficiency on embedded devices.

Android also provides an application framework that allows developers to use various pre-built components such as activities, services, content providers, and broadcast receivers to build their applications. These components help developers to manage the lifecycle of their applications and handle various system events, such as incoming phone calls or text messages.

The Android SDK (Software Development Kit) provides all the necessary tools and libraries for developers to build, test, and debug their applications. The SDK includes an emulator, which allows developers to test their applications on a virtual device before deploying them on an actual Android device.

In addition, Android is built with security in mind. It includes various security features such as app sandboxing, which isolates each application and its data from other applications, and permissions, which allow users to control what data and resources an application can access on their device.

In 2007, Google and a group of partner companies formed the Open Handset Alliance with the following objective: "The Open Handset Alliance is a group of 47 technology and mobile companies who have come together to accelerate innovation in mobile and offer consumers a richer, less expensive, and better mobile experience. Together we have developed Android™, the first complete, open, and free mobile platform. We are committed to commercially deploy handsets and services using the Android Platform" (http://www.openhandsetalliance.com). By working together, the alliance aims to create a more open and innovative mobile ecosystem that benefits both developers and consumers.

The list of member companies is huge, including but not limited to TI, NVIDIA, Qualcomm, Acer, Sprint, Toshiba, and LG. Shortly thereafter, in 2008, Google released the Android source code under the Apache license. This move attracted, and will continue to attract, top developers from all over the world who are comfortable with the Java programming language in a Linux-type environment.

By releasing the Android source code under an Apache license, Google allowed developers to modify and distribute the code, which led to the creation of numerous custom ROMs (modified versions of the Android operating system) and a thriving community of developers. It also allowed manufacturers to use the Android operating system on their devices without any licensing fees, which helped to accelerate the adoption of Android.

The release of the Android source code and the formation of the Open Handset Alliance were huge moves by Google. The alliance will aid its member companies in deploying the Android operating system on their respective devices. In other words, we will see the Android operating system on many different brands of mobile phones.

The open-source nature of Android and the formation of the Open Handset Alliance have allowed for the widespread adoption of the Android operating system across many different mobile devices from various manufacturers. This has helped to create a more competitive and diverse mobile market, which ultimately benefits consumers.

By the end of 2009, there will be over 15 mobile devices running the Android operating system. There are over 15,000 apps on the Android Market (as of November 2009), up from 10,000 two months earlier. By comparison, there are over 100,000 apps on the Apple App Store (as of November 2009); the App Store is 16 months old and has seen over 2 billion downloads since inception.

The Android operating system is solid and it isn't going anywhere. Here are some good resources on the Internet.

Getting Started with Android Development

The GNU Operating System

Free Software Foundation

Open Handset Alliance

What is Android?

Dalvik Virtual Machine insights

Android Notes

Inside the Android Application Framework

Saturday, April 4, 2009

FreeBSD remote kernel testing

FreeBSD 7 remote kernel testing

If the host is at a remote location and is running a custom-built kernel, there is a handy feature in FreeBSD that allows a new kernel to be tested without breaking the entire system.

The commands (including the make commands for building and installing the kernel) are as follows:

# cd /boot

# cp -R kernel kernel.good

# cd /usr/src

# make KERNCONF=MYKERNELNAME buildkernel

# make KERNCONF=MYKERNELNAME installkernel

This will install the new kernel in /boot/kernel

# cd /boot

# mv kernel kernel.mykernelname

# mkdir kernel

# cp kernel.good/* kernel/

# nextboot -k kernel.mykernelname

 

Upon reboot, the system will load kernel.mykernelname and then erase the part of the configuration that told it to load kernel.mykernelname.

Consequently, subsequent reboots will load the kernel located in /boot/kernel which is the original kernel.

Assuming that kernel.mykernelname loaded successfully, run the following commands to make the new kernel permanent:

# mv /boot/kernel /boot/kernel.previous

# mv /boot/kernel.mykernelname /boot/kernel

Friday, April 3, 2009

Plone and Apache on FreeBSD 7 behind PF

Plone 3.2.1 and Apache 2.2 on FreeBSD 7.2 behind PF

In early 2001, PF had not yet been integrated into the OpenBSD kernel. That changed with OpenBSD 3.0, released later in 2001, which shipped PF in the kernel; soon thereafter, PF was ported to FreeBSD, first appearing in the FreeBSD 5.3 kernel. For those who are unfamiliar, PF is a system for filtering TCP/IP traffic and providing network address translation. PF also provides network traffic shaping capabilities - packet prioritization, bandwidth control, and TCP/IP conditioning. My original article from 2000 explained how to set up an IP-less bridge on an OpenBSD 2.8 server running IP Filter with dual network interface cards; the bridge filtered traffic at the data link layer and was invisible at the Internet protocol level. PF, like IP Filter, is very powerful. While I will not be going into how to configure an IP-less bridge, the PF configuration that follows is straightforward and easily adaptable.

This document shows an example configuration of a PF ruleset and an Apache 2.2 installation in front of a Plone 3.2.1 instance on FreeBSD 7.2.  SSH tunnelling is used for remote management of the Zope/Plone instance (i.e. ZMI).

Pre-installation Requirements

  • FreeBSD 7.2-PRERELEASE w/ PF enabled
  • Apache 2.2 from (from ports) with mod_ssl (OpenSSL 0.9.8j), mod_proxy, and mod_rewrite
  • Zope w/ Plone 3.2.1
  • Varnish HTTP Accelerator

Zope is bound to an unprivileged port on localhost. Apache listens on ports 80 and 443 on public IP address X.X.X.Y and runs its worker processes as a non-privileged user. Zope/Plone can run standalone or in a ZEO server / ZEO client configuration. In either case, Apache functions as a reverse proxy and forwards HTTP requests to Zope.

Enable IP Forwarding in the Kernel

Enable IP forwarding in the kernel:

# sysctl net.inet.ip.forwarding=1
# sysctl net.inet.ip.fastforwarding=1
# sysctl net.inet6.ip6.forwarding=1

Add the following lines to /etc/sysctl.conf so that when the host is rebooted, IP forwarding is enabled:

# /etc/sysctl.conf

net.inet.ip.forwarding=1
net.inet.ip.fastforwarding=1
net.inet6.ip6.forwarding=1
kern.ipc.somaxconn=4096
kern.ipc.nmbclusters=32768

As an alternative, add the following to /etc/rc.conf

# /etc/rc.conf

gateway_enable="YES"

 

Enable HTTP Accept Filter

Next, make sure that the HTTP Accept filter is loaded into the kernel.

Check this by running the following command:

# kldstat

2 1 0xc0b12000 2464 accf_http.ko

If the filter is not loaded, edit /boot/loader.conf and add the following line so that when the host is rebooted, the HTTP Accept filter kernel module is loaded.

# /boot/loader.conf
accf_http_load="YES"

Last of all, to load the module without rebooting, run the following command:

# kldload accf_http

 

System V Shared Memory and Semaphore Parameters

Modify System V shared memory and semaphore parameters

# sysctl kern.ipc.shmall=32768
# sysctl kern.ipc.shmmax=134217728
# sysctl kern.ipc.semmap=256

Make these changes permanent by adding the following to /etc/sysctl.conf

# /etc/sysctl.conf

kern.ipc.shmall=32768
kern.ipc.shmmax=134217728
kern.ipc.semmap=256
net.inet.ip.forwarding=1
net.inet.ip.fastforwarding=1
net.inet6.ip6.forwarding=1
kern.ipc.somaxconn=4096
kern.ipc.nmbclusters=32768

 

System V "Read-Only" Semaphore Parameters

Modify the System V "Read-Only" Semaphore Parameters by adding the following to /boot/loader.conf  

NOTE: Reboot for the new values of these parameters to take effect

# /boot/loader.conf

kern.ipc.semmni=256
kern.ipc.semmns=512
kern.ipc.semmnu=256
accf_http_load="YES"
net.inet.tcp.syncache.hashsize=1024
net.inet.tcp.syncache.bucketlimit=100
net.inet.tcp.tcbhashsize=4096
net.inet.tcp.syncache.cachelimit=102400

 

PF Configuration

# /etc/pf.conf
#

## MACROS-----
ext_if="bge0"
set loginterface $ext_if
local_networks = "{ a.a.a.b/24, c.c.c.d/24, e.e.e.f/26}"
internet_ports = "{80, 443}"

# Table Setup
# /etc/iface_addresses contains the following
# X.X.X.Y
# X.X.X.Z
table <iface_addresses> persist file "/etc/iface_addresses"
table <bruteforce> persist

# set Block Policy option
set block-policy return

# set Skip Filtering option on localhost
set skip on lo0

scrub in all
antispoof quick for $ext_if inet

# block ip addresses contained in bruteforce table
block in log (all, to pflog0) quick on $ext_if from <bruteforce> to any

# block and then log outgoing packets that don't have one of our IPs as the source IP
block out log (all, to pflog0) quick on $ext_if from ! <iface_addresses> to any

# block nmap scans
block in log (all, to pflog0) quick on $ext_if inet proto { tcp, udp } from any to any flags FUP/FUP

# block everything by default
block in on $ext_if all

# pass in icmp and keep state
pass in quick on $ext_if inet proto icmp all keep state

# pass in tcp traffic from localhost
pass in quick on $ext_if proto tcp from 127.0.0.1 to <iface_addresses>

# pass in traffic on internet ports
pass in on $ext_if proto { tcp, udp } from any to <iface_addresses> port $internet_ports flags S/SA keep state

# throttle ssh connection attempts and block their ip if a bruteforce attempt is detected
pass in quick on $ext_if proto tcp from any to any port ssh \
flags S/SA keep state \
(max-src-conn 15, max-src-conn-rate 5/3, \
overload <bruteforce> flush global)

# allow traffic from interface addresses to localhost
pass out quick on $ext_if from <iface_addresses> to 127.0.0.1

# allow local network admin ip addresses
pass in on $ext_if proto { tcp, udp } from $local_networks to $ext_if

# keep state on outbound connections made from one of the ip addresses on interface
pass out on $ext_if proto tcp from any to any flags S/SA modulate state
pass out on $ext_if proto { udp, icmp } from any to any keep state


Apache Configuration

# /usr/local/etc/apache22/httpd.conf
# ----------------------------------


# Default
Listen X.X.X.Y:80

# VHost
Listen X.X.X.Z:80
Listen X.X.X.Z:443

HTTP Virtual Hosts

# /usr/local/etc/apache22/Includes/httpd-vhosts.conf
# --------------------------------------------------
NameVirtualHost X.X.X.Z:80

<VirtualHost X.X.X.Z:80>
ServerName DOMAIN.com
ServerAlias www.DOMAIN.com
ServerAdmin info@DOMAIN.com
ServerSignature On
RequestHeader set Front-End-Https "On"
ProxyRequests Off
ProxyPreserveHost On

ErrorLog "/var/log/DOMAIN-error_log"
CustomLog "/var/log/DOMAIN-access_log" common
LogLevel warn

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteLogLevel 2
RewriteRule ^/icons/ - [L]
RewriteRule ^/(.*)/manage(.*) \
https://DOMAIN.com/$1/manage$2 [NC,R=301,L]
RewriteRule ^/manage(.*) \
https://DOMAIN.com/manage$1 [NC,R=301,L]
RewriteRule ^/login_(.*) https://%{SERVER_NAME}/login_$1 [NE,L]
RewriteRule ^/(.*) \
http://127.0.0.1:8902/VirtualHostBase/http/%{SERVER_NAME}:80/MyPloneSite/VirtualHostRoot/$1 [P,L]
</IfModule>
<IfModule mod_proxy.c>
ProxyVia On
ProxyPass / http://127.0.0.1:8902/VirtualHostBase/http/%{SERVER_NAME}:80/MyPloneSite/VirtualHostRoot/
ProxyPassReverse / http://127.0.0.1:8902/VirtualHostBase/http/%{SERVER_NAME}:80/MyPloneSite/VirtualHostRoot/
<ProxyMatch http://127.0.0.1:*/.* >
Order deny,allow
Deny from all
Allow from X.X.X.Y
</ProxyMatch>
<LocationMatch "^[^/]">
Deny from all
</LocationMatch>
</IfModule>
</VirtualHost>

SSL Virtual Hosts

# /usr/local/etc/apache22/Includes/httpd-ssl.conf

NameVirtualHost X.X.X.Z:443

<VirtualHost X.X.X.Z:443>
DocumentRoot "/usr/local/www/apache22/data"
ServerName DOMAIN.com
ServerAdmin info@DOMAIN.com
ErrorLog "/var/log/DOMAIN-ssl-error.log"
TransferLog "/var/log/DOMAIN-ssl-access.log"

SSLEngine on
SSLCipherSuite ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+SSLv3:+EXP:+eNULL
SSLCertificateFile "/usr/local/etc/apache22/ssl.crt/DOMAIN.com.crt"
SSLCertificateKeyFile "/usr/local/etc/apache22/ssl.key/DOMAIN.com.key"

<FilesMatch "\.(cgi|shtml|phtml|php)$">
SSLOptions +StdEnvVars
</FilesMatch>
<Directory "/usr/local/www/apache22/cgi-bin">
SSLOptions +StdEnvVars
</Directory>
BrowserMatch ".*MSIE.*" \
nokeepalive ssl-unclean-shutdown \
downgrade-1.0 force-response-1.0

CustomLog "/var/log/cfhinton-ssl_request.log" \
"%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b"

<IfModule mod_rewrite.c>
RewriteEngine On
RewriteLogLevel 2

RewriteRule ^/(.*) \
http://127.0.0.1:8902/VirtualHostBase/https/%{SERVER_NAME}:443/MyPloneSite/VirtualHostRoot/$1 [L,P]
</IfModule>
<IfModule mod_proxy.c>
ProxyVia On
ProxyPass / http://127.0.0.1:8902/VirtualHostBase/https/%{SERVER_NAME}:443/MyPloneSite/VirtualHostRoot/
ProxyPassReverse / http://127.0.0.1:8902/VirtualHostBase/https/%{SERVER_NAME}:443/MyPloneSite/VirtualHostRoot/
<ProxyMatch http://127.0.0.1:*/.* >
Order deny,allow
Deny from all
Allow from X.X.X.Y
</ProxyMatch>
<LocationMatch "^[^/]">
Deny from all
</LocationMatch>
</IfModule>
</VirtualHost>

SSH Tunnel (access ZMI via http://localhost:9999)

ssh -f user@mydomain.com -L 9999:localhost:8095 -N 
 

Monday, December 29, 2008

Building PHP5 on Linux

./configure options for a PHP 5.2.6 build on Linux running a 2.6.18-92 kernel

# ./configure --disable-static --disable-debug --prefix=/usr/local/apache2/php --with-config-file-scan-dir=/usr/local/apache2/php --enable-libxml --with-libxml-dir=/usr/local/lib --enable-reflection --enable-spl --enable-zend-multibyte --with-regex=system --with-tidy --enable-zip --enable-bcmath --with-bz2=shared --enable-calendar --with-curl=shared --enable-dba --enable-exif --enable-ftp --with-gd --enable-gd-native-ttf --with-jpeg-dir=/usr --with-png-dir=/usr --with-zlib-dir=/usr --with-gettext=shared --with-gmp=shared --with-imap-ssl --with-imap --enable-mbstring --with-mcrypt=shared --with-mhash=shared --with-mysql --with-mysqli --with-openssl-dir --with-pdo-mysql --enable-sockets --with-xsl --with-zlib --with-apxs2=/usr/local/apache2/bin/apxs --disable-cgi --enable-pcntl --enable-soap --enable-dbase --enable-sysvmsg --enable-sysvsem --enable-sysvshm --with-gdbm --with-kerberos

 # make && make install

# chown -R apache.web /usr/local/apache2/

Monday, March 10, 2008

Libpq vs Libpqxx

Libpq is the C application programmer's interface to PostgreSQL, and Libpqxx is the C++ application programmer's interface to PostgreSQL. Libpqxx actually wraps the functions in Libpq. However, Libpqxx is slower than Libpq, and Libpqxx 2.6.9 is not compatible with PostgreSQL 8.3 and GCC 4.1.x on Red Hat / Fedora 5.

After using Libpqxx for the past 5 years against a PostgreSQL 7.4 database, I migrated the database to 8.3 and Libpqxx to the current stable version, 2.6.9. I am running dual quad-core Xeon processors on a 64-bit Red Hat Enterprise 5 server with GCC 4.1.x. Since Libpqxx is just a wrapper library around Libpq, calling Libpq directly avoids the object/template overhead. After migrating our codebase to use Libpq, I was very pleased with the results: error checking is simple, code execution is faster, and library and executable sizes were reduced.
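
For readers who have only used Libpqxx, here is a minimal sketch of the direct Libpq call pattern described above; the connection string, table, and columns are placeholders.

#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn;
    PGresult *res;
    int i;

    /* Connection parameters are placeholders */
    conn = PQconnectdb("host=localhost dbname=mydb user=myuser");
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "Connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    res = PQexec(conn, "SELECT id, name FROM mytable");
    if (PQresultStatus(res) != PGRES_TUPLES_OK) {
        fprintf(stderr, "Query failed: %s", PQerrorMessage(conn));
    } else {
        for (i = 0; i < PQntuples(res); i++)
            printf("%s  %s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
    }

    PQclear(res);
    PQfinish(conn);
    return 0;
}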