Monday, July 3, 2023

IntelliJ: JAVA_HOME environment variable is not defined correctly when running mvn

 If you get an error:

C:\>mvn -version
The JAVA_HOME environment variable is not defined correctly
This environment variable is needed to run this program 

NB: JAVA_HOME should point to a JDK, not a JRE
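On Windows, for example, you can point it at the JDK root (not the bin folder) and persist it for new terminals; the path below assumes the OpenJDK 20 location mentioned later in this post:

C:\>setx JAVA_HOME "C:\Program Files\Java\jdk-20"

Then open a new command prompt (setx does not affect the current one) and re-run mvn -version.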


Another option is to ensure your pom.xml file specifies the Java version details


The example below uses OpenJDK version 20, which is located at C:\Program Files\Java\jdk-20\bin

In IntelliJ IDEA 2023.1.3, you can verify the "20" under File > Project Structure > SDK

<properties>
    <maven.compiler.source>20</maven.compiler.source>
    <maven.compiler.target>20</maven.compiler.target>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
</properties>

Tuesday, October 20, 2020

Locating Uranus from Aries - Oct 2020

Locating Uranus from Aries with Binoculars and a little help from the planet Mars in light polluted skies (Oct 2020)

Locate the bright stars Hamal, Sheratan and Mesarthim in the constellation Aries by looking to the left of a very bright Mars in the evening sky.

From Mesarthim, locate Iota Arietis just to the right and then count down five fainter stars to locate 19 Arietis. If 19 Arietis is in the upper right of your binoculars, then Uranus should be somewhere in view in the lower left.…

[Finder chart: Sheratan, M74, Mesarthim, Iota Arietis, Mars, 19 Arietis, Uranus, 29 Arietis]

Wednesday, April 15, 2020

Implementing Innovative Ideas: As an internet customer, I want my modem to be smart enough to auto-renew credentials so that I have reliable service

[The user story could also be phrased from a developer's perspective as: "As a developer, I want the modem to be smart and renew its expiring credentials so that my time can be better utilized"]

In the middle of PI execution, a developer has an innovative idea: make the modem a smart device so that it knows when its credentials/security certificate are expiring and is aware that it will have connectivity issues.
The device can be made smart enough to reach out to an end server and say, "Hey, my credentials are expiring in a few months. I think I need to renew my credentials."
Assuming the end server has connectivity to the authentication servers and certificate authority, it can determine whether the device is eligible for renewal and send back renewed credentials.
The device receives the renewed credentials, maybe sends a thank-you note (acknowledgement) to the end server, and installs or updates them.

Assume the organization is using a scaled agility framework. Although estimates for the implementation are provided in PI planning, the team(s) focus on business objectives, intent and alignment around outcomes, so they may find that the actual implementation plan and process are not detailed in PI Planning, regardless of any pre-PI planning. The objective may be to 'get all modems renewed with updated credentials'. The business outcome would be to ensure the reliability of devices and device connectivity by automating credential renewals.

So the question comes up: how were the feature estimates provided without an implementation plan? Even assuming the feature estimate was based on top-down, historic info, the team also provides estimates that need to correlate.

Feature estimates are usually top-down, refined and then correlated with user stories from the bottom up. So for PI Planning an estimate is provided, and an implementation plan is discussed across multiple suppliers, manufacturers, and any other teams.
However, during PI execution, a better, more innovative idea occurs to a developer or engineer.

So another question: does the team take it up with product management and business owners and revisit the implementation? The PI Planning board is expected to have laid out all dependencies (remember the red ribbons), so if one team changes something that another team depends on, it impacts all teams.
How would the idea be taken forward to get it implemented cross-supplier and cross-team? Would the PM and Business Owner be required to take the idea forward to all the teams?

Remember, teams have already submitted estimates for the PI objective and capacity planning is completed.

The best answer to these questions may be: Communicate! Communicate! Communicate! PO sync, Scrum Master sync, system demos.

Perhaps the best answer around estimation of user stories is to use generic sizing like t-shirt sizes: gauge how much of the work is uncertain, how complex it is, and how many resources it needs, and then determine your Fibonacci-based estimate from the size. Recall that a user story follows the 3 C's - Card, Conversation, Confirmation.

The actual acceptance criteria will become known only during the team's backlog refinement in iteration execution. Recall that acceptance criteria are the finer details of the implementation.



Note that SAFe is a registered trademark of Scaled Agile, Inc., and more info can be found at http://www.scaledagileframework.com



Friday, March 27, 2020

Agile frameworks: Supporting "Maneuverability" of Agile Teams to support urgent last minute requirements vs "Tyranny of the Urgent"

After working on a large-scale SAFe installation and observing the issues with the implementation, I came to the understanding that scaling agile requires support for urgent, last-minute requirements. I call it "maneuverability" of agile teams. It means supporting just-in-time planning even right before the start of an increment (such as a SAFe PI), even during a planning session (such as SAFe PI Planning), and even if it means shifting epics/user stories in or out during an incremental delivery.

Although frameworks like SAFe support uncommitted objectives or variability in a program increment, these objectives are still pre-defined and planned for, although the outcomes are uncertain. Perhaps a certain capacity for variability or uncommitted objectives needs to be planned into each PI to support urgent needs.

It helps to remember that frameworks such as Scaled Agile usually have very definite requisites on team size, train size and processes. Nonetheless, one should be able to adapt the framework to one's needs. And seriously, even large corporations such as those in the aerospace or telecom industries work with smaller suppliers to get services, and these suppliers are often dedicated to multiple customers. So to avoid the framework becoming another "tyrannical framework" with processes to be followed to the letter, the framework needs to support its own adaptability.

Hopefully in one of the future SAFe releases (maybe SAFe 6), they will decide to include "maneuverability", or the ability of teams to maneuver some of the items they are working on to support the "urgent".
Although SAFe references "avoiding the tyranny of the urgent", sometimes urgency is the nature of markets and business. If teams can swap out lower-priority stories planned in the next incremental delivery, and frameworks support just-in-time planning for "urgent" user stories, then are they not being truly agile?

Saturday, June 2, 2012

2012 Ring of Fire Annular Solar Eclipse

We had gone up to Mt Shasta (with SFAA) to view the 2012 Annular Solar Eclipse. The eclipse was captured in four videos; the third of the four has the Moon in the center of the Sun.
Link to the 2012 Annular Solar Eclipse on YouTube: 2012 Annular Solar Eclipse - Totality
The camcorder used was a Canon DC22 with a filter from Rainbow Symphony


Pics and video clips of the 2012 Ring of Fire Annular Solar Eclipse are viewable on this blog under the page '2012 Ring of Fire Solar Eclipse'

RFID Reader Configuration Params

According to a major RFID reader and tag manufacturer, there are 128 ways to configure reader-to-tag and tag-to-reader communications. Many readers come with pre-set settings for 4 or 5 of the "best" or "better" combinations.

Here are some that I look out for when setting up a reader configuration:
  • RSSI: check the farthest distance a tag can be from the antenna, get its RSSI value and then set the reader to filter out any tags with a lower RSSI value

  • DRM (Dense Reader Mode): unless there are enough readers to need more than the 50 available channels (915 MHz is really 902-928 MHz) with channel hopping across those 50 channels, use Single Reader Mode, or Multi Reader Mode if available
  • Note: If using Single Reader Mode (not all readers provide Multi-Reader Mode settings), then it becomes an interesting exercise to avoid tag collisions and missed tag reads due to persistent values

  • Auto-Start: With auto-start, readers are set up to read either periodically, immediately or based on some input trigger. If using periodic reads, set the period small enough that someone walking by an antenna with an RFID-tagged asset will be in front of the antenna long enough for a read to take place. I like to use 250 ms for directional portals if not using continuous or immediate reads.

  • Singulation and Dual Target: With Class 1 Gen 2 standards, tags can be in either state A or state B. It is sort of like putting your hand down after a roll call and then leaving it down, if the analogy makes sense. So unless I know the last roll call and its results, it's best to choose "dual target" to ensure all tags are read. Dual target makes sure tags in both states are read.

  • Channels: In a multi-reader environment, since there are 50 available channels to choose from, why start all the readers at channel 1? The possibility of channel collisions becomes higher, so I like to set up each reader to start on a different channel.

  • Sessions: Sessions are useful in a multi-reader environment in that if 3 or 4 readers are working together, each reader can be set to interact with tags in a different session. One thing I only recently understood was that sessions greater than 1 (i.e. Session 2 or 3) leave their tags in state B indefinitely. So definitely use dual target if you require all tags to be read and are using more than one session.
  • Note: This brings up a value to set for tag persistence. Persistence sets how long a tag will stay in state B ("hands down" in the roll-call analogy) before it switches back to state A. If using sessions greater than 1, then setting this value may not have the expected effect

  • Picture a cone in front of the antenna: it's useful to remember that an antenna creates a cone which defines its read circumference and area. These days we can fine-tune antennas to read from a couple of inches in front of the antenna to a couple of metres away (or farther). Granted, periodic stray reads occur, the antenna footprint is never a nice, even oval or circle, and dead spots or nulls are a reality. However, we can work with these limitations by imagining a read area shaped like a cone in front of the antenna and ensuring tags within this area are always read.
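Putting several of these parameters together: below is a minimal sketch in Java, where ReaderSettings is just a plain holder (not any vendor's real SDK) and the values simply restate the guidelines above:

// Illustrative only: a plain settings holder, not a real reader API.
public class ReaderSettings {
    int rssiFloorDbm = -65;         // assumed cutoff: filter tags weaker than the RSSI measured at the farthest wanted distance
    boolean denseReaderMode = true; // DRM when many readers contend for the 902-928 MHz band
    int startChannel = 17;          // stagger start channels across readers (1-50)
    int session = 2;                // one session per reader in a multi-reader site
    boolean dualTarget = true;      // read state A and state B tags (needed with session 2/3)
    int periodicReadMs = 250;       // short enough to catch a walk-by at a directional portal

    public static void main(String[] args) {
        ReaderSettings s = new ReaderSettings();
        System.out.println("session=" + s.session + ", dualTarget=" + s.dualTarget);
    }
}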

  

I moved my post on some of the params that I find useful when setting up RFID readers and antennas to get better reads into its own page under RFID on this blog.

Tuesday, April 3, 2012

Using Informix 32-bit library with Cognos on 64-bit server

1. Getting and installing the Informix 32-bit ODBC libraries
a) I used connect.3.50.UC6.LINUX.tar for Cognos 10.1/Red Hat 5.4. One can get the libraries here:
http://www14.software.ibm.com/webapp/download/search.jsp?rs=ifxic
b) Create a group and user for informix
#/usr/sbin/groupadd informix
#/usr/sbin/useradd -g informix informix
#passwd informix
#mkdir /home/informix/connect // copy connect.3.50.UC6.LINUX.tar to /home/informix/connect
#cd /home/informix/connect
#tar -xvf connect.3.50.UC6.LINUX.tar
#./installconn // default install is to the /opt/IBM/informix folder
#vi $INFORMIXDIR/etc/sqlhosts // example: ifx_server1 onsoctcp IP service
#vi /etc/services // add the service from sqlhosts with port and protocol
2. Set up Cognos baadmin profile
#su - baadmin //change to baadmin user
$vi .bashrc
a) set the following environment variables: INFORMIXDIR, INFORMIXSERVER
export INFORMIXDIR=/opt/IBM/informix
export INFORMIXSERVER=server_name

b) add the informix bin folder to the PATH
export PATH=$PATH:$INFORMIXDIR/bin
c) add the informix shared libraries to the path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$INFORMIXDIR/lib/cli
I also modified the PATH and LD_LIBRARY_PATH variables for the Cognos libraries:
export PATH=$PATH:/opt/ibm/cognos/c10_64/bin:/opt/ibm/cognos/c10_64/bin64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ibm/cognos/c10_64/bin
(/opt/ibm/cognos/c10_64/bin contains libcogudaif.so. I also created a soft link in /usr/lib to this shared library and have not tried to remove it.)
d) Stop and Restart all servers
#/etc/init.d/cognos10 stop
#/etc/init.d/cognos10 start

Don't forget to change /etc/services and add the Informix TCP connection info that is in the sqlhosts file.
odbc.ini and odbcinst.ini are also required in the /etc folder for ODBC connectivity.
All libodbc*.* files are also required in /usr/lib.
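For reference, a minimal /etc/odbc.ini sketch (the DSN, server and database names are illustrative; the driver path assumes the default CSDK install location):

[ifx_dsn]
Driver=/opt/IBM/informix/lib/cli/iclit09b.so
Description=IBM Informix ODBC driver
ServerName=ifx_server1
Database=mydb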

With the above, I am able to use the Cognos Informix connection to connect to Informix.

Wednesday, May 25, 2011

Starting a remote cygwin connection to ibm cloud server

Configuring cygwin to open a remote client to Linux (RHEL 5.4)
1. Download and install cygwin on your Windows box
Required packages include xorg-server, xterm, xauth, openssh and an editor such as vim. You can also get xclock and xcalc, which are handy utilities
2. Start cygwin from either the Windows START button or the desktop icon if you opted to install one
3. $startxwin //starts the x server
this should popup an xterm on your windows box
4.
$xhost +127.0.0.1
$xhost +localhost
check the required entries in /etc/hosts
127.0.0.1 localhost
5. $export DISPLAY=localhost:0.0
6. $ssh -X user@host -i key
The above should open a terminal to the remote server with prompt
7. Check X11Forwarding
$echo $DISPLAY
or
$env | grep DISPLAY
DISPLAY=localhost:10.0

That's it! You should be able to start up remote X client software such as cogconfig.

A couple of other troubleshooting pointers if xterm still won't start or an error displays:
a) under the home directory, check the ~/.ssh/config file
Host *
ForwardX11 yes

b)Check the global /etc/ssh/sshd_config
X11Forwarding yes

c) Check the global /etc/ssh/ssh_config
ForwardX11 yes

Tuesday, September 28, 2010

Creating an Informix Database Server

I used the following steps to recreate an Informix database server and the databases within it on Linux/Ubuntu. Since I had a dbexport, I could use dbimport with the -d option to reimport the tables into the new database

Log in as informix or root
1) Modify the onconfig file:
rootdbs, path to rootdbs, DBSERVERNAME, log files and paths to log files
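The relevant onconfig entries look something like this (paths and server name are illustrative):

ROOTNAME rootdbs
ROOTPATH /path_to_rootdbs/rootdbs
DBSERVERNAME new_server
MSGPATH /opt/IBM/informix/online.log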

2) Update environment variable to new server
export INFORMIXSERVER=

3) Update the sqlhosts file for the new server
If on Linux, the /etc/services file may also need to change, depending on the sqlhosts settings

4) create a path to the new rootdbs
$touch /path_to_rootdbs/rootdbs
$chmod 660 /path_to_rootdbs/rootdbs
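With the root chunk in place, the instance can be initialized (a note of caution: oninit -i initializes the root dbspace from scratch and destroys any data already in it):
$oninit -ivy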

5) Create any dbspaces you may need if you wish to store databases outside of rootdbs
$onspaces -c -d dbspace -p path_to_dbname -o 0 -s sizeKB

6) If you don't already have a logon:
a) go through the usual user and group creation and add the user to the appropriate groups, with a profile so they can get to dbaccess
b) I don't create the database, since I plan to use dbimport from another server

7) Navigate to where the database.exp folder is
$dbimport -c newDB -d dbspace
If the appropriate .sql file is in the folder, this should run and reimport all the tables

8) You should be able to use dbaccess to access the new db
$dbaccess newDB

Monday, September 27, 2010

Partitioning Informix table data into quarterly and historic info


1) Setup a date column to use for partitioning
The easiest way is to have an update_ts column with the Informix date format. I use Java to update the Informix database, and since I want to use MM/dd/yyyy as my timestamp, I first code the Java to eliminate unnecessary seconds/milliseconds from the current timestamp generated by System.currentTimeMillis()
long ts = System.currentTimeMillis() ;
java.text.DateFormat sdf = new java.text.SimpleDateFormat("MM/dd/yyyy") ; // no trailing space in the pattern
String newDateStr = sdf.format(new java.sql.Date(ts)) ;
System.out.println("updated timestamp:" + newDateStr) ;

2) Fragment an existing Informix table with the init clause
alter fragment on table events_tbl init
fragment by expression
partition ptn_qtr1 (update_ts between "01/01/2010" AND "03/31/2010") in dbsp1,
partition ptn_qtr2 (update_ts between "04/01/2010" AND "06/30/2010") in dbsp1,
partition ptn_qtr3 (update_ts between "07/01/2010" AND "09/30/2010") in dbsp1,
partition ptn_qtr4 (update_ts between "10/01/2010" AND "12/31/2010") in dbsp1,
partition ptn_historic (update_ts < "01/01/2010") in dbsp1 ;
create index tbl_idx on events_tbl (update_ts) ;

index tbl_idx created

3) A script runs on the last day of every quarter that detaches the partition holding the next quarter's data
(in the first year this partition will be empty)
Ex: end of Qtr 2 and start of Qtr 3
alter fragment on table events_tbl
detach partition ptn_qtr3 tbl_ptn_qtr3_tmp

4) Create and attach a new partition to events_tbl (see the sketch below)
(The new partition should be a table with the same structure as events_tbl and the same index for partitioning)
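A sketch mirroring the detach/attach syntax in steps 3 and 5a (the new table name, partition name and date range are illustrative; tbl_ptn_qtr3_new must already exist in dbsp1 with the same columns as events_tbl):

alter fragment on table events_tbl
attach tbl_ptn_qtr3_new
as partition ptn_qtr3_new (update_ts between "07/01/2011" AND "09/30/2011")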

5a) Re-attach the detached partition so that its data is now in the history partition
The following attaches the detached table as a new partition:
alter fragment on table events_tbl
attach tbl_ptn_qtr3_tmp
as partition ptn_tmp
Note: this adds an extra partition to the table

5b) After re-attaching the old fragment, refragment to bring the table back down to the original 5 partitions
(dates will need to be adjusted to the new quarter/year)
drop index tbl_idx ;
alter fragment on table events_tbl init
fragment by expression
partition ptn_qtr1 (update_ts between "01/01/2010" AND "03/31/2010") in dbsp1,
partition ptn_qtr2 (update_ts between "04/01/2010" AND "06/30/2010") in dbsp1,
partition ptn_qtr3 (update_ts between "07/01/2010" AND "09/30/2010") in dbsp1,
partition ptn_qtr4 (update_ts between "10/01/2010" AND "12/31/2010") in dbsp1,
partition ptn_historic (update_ts < "01/01/2010") in dbsp1 ;
create index tbl_idx on events_tbl (update_ts) ;


This effectively reorganizes the data into the 5 partitions and drops the old fragments.

6) Once the historic data is stored, it can be used to feed reports or to segregate data for a datacube or datamart

Thursday, September 17, 2009

PostgreSQL PL/pgSQL to Informix SPL migration

PL/pgSQL coding can be quite a different experience from coding in Informix SPL. However, almost all of the procedural functionality can be converted to SPL.

1. Variables: In SPL use DEFINE instead of PL/pgSQL's DECLARE
Most built-in PL/pgSQL variable types are supported.
For %ROWTYPE, however, one has to create a ROW type datatype
In PL/pgSQL: DECLARE var tbl.colname%TYPE
In SPL use: DEFINE var LIKE tbl.colname
Assigning variables in SPL requires the LET keyword:
LET var1 = var2 + 5 ;

2. User-defined functions returning values
SPL declares RETURNING in the function header, whereas PL/pgSQL uses RETURNS

3. Date/Time
I couldn't find a 'with or without time zone' clause in SPL. However, if the TZ environment variable is set,
then one can call: select DBINFO('get_tz') from tablename
PL/pgSQL's TIMESTAMP is equivalent to SPL's DATETIME
The INTERVAL, DATE and TIME data types are the same in both languages
To convert an epoch time to a datetime in SPL:
select DBINFO('utc_to_datetime', colname) from tablename ;
select DBINFO('utc_current') returns the current time as an epoch value
Converting a datetime back to an epoch is more interesting:
some developers have blogged to_epoch user-defined functions that manually convert a datetime to an integer.


4. EXCEPTION handling
SPL provides SQLCODE = 100 for a successful query that returns no rows.
This is similar to PL/pgSQL's IF NOT FOUND clause.
I found I had to use a cursor to retrieve the SQLCODE:
create function fn()
returning integer ;
DEFINE qry varchar(100) ;
DEFINE found varchar(100) ; -- holds the fetched column value
LET qry = "select col from table where col='somevalue'" ;
PREPARE stmt from qry ;
DECLARE cursor1 cursor for stmt ;
OPEN cursor1 ;
FETCH cursor1 into found ;
if (SQLCODE = 100) then
raise exception -746, 0, "no matching row found" ;
end if
CLOSE cursor1 ;
FREE cursor1 ;
return SQLCODE ;
end function ;
The -746 is a custom error code provided by Informix, allowing a developer to raise their own exception

If someone doesn't want to use cursors, another option after running the select statement is:
select DBINFO('sqlca.sqlerrd2') into rows_affected from systables where tabid=1 ;
or
LET retcode = DBINFO('sqlca.sqlerrd2') ;
If no rows were returned, this will return 0 into rows_affected

5. SEQUENCES
ANSI SQL does not support sequences as DEFAULT column values in CREATE TABLE DDL, although PL/pgSQL allows it.
Once you create a sequence, the next value can be retrieved into an SPL variable and then used in an insert statement, as sketched below.
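A minimal sketch inside an SPL routine (sequence, table and column names are illustrative):

DEFINE nextid integer ;
-- systables serves as a convenient one-row dummy table (tabid = 1 always exists)
SELECT seq1.NEXTVAL INTO nextid FROM systables WHERE tabid = 1 ;
INSERT INTO tab1 (id) VALUES (nextid) ;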

6. SQL RETURNING clause
PL/pgSQL provides a RETURNING clause after an SQL insert, update or delete statement (not ANSI compliant).
In SPL I wrote an SQL select statement after the insert/update SQL to get the value.

7. Testing an SPL routine
One can use dbaccess to create a routine and save it:
from the Query-language -> New option, then run
execute function function_name() ;

The returned values/error messages are displayed

Wednesday, September 16, 2009

Informix Cheat Sheet

Shared Memory stats
$ipcs -m
$onstat -g seg


To quickly get Informix version number:
$onstat -s

Use oninit with the -s option before using onmode -m.
This will allow you to check log files and other config params before users can log on:
$onstat -lRdt

Use onmode with the -yuk option. This terminates user sessions, rolls back transactions and then goes offline, allowing for a quicker restart
$onmode -yuk

The default page size with onspaces is 2K. You can use onmonitor or -k option to change the page size.
$onstat -bB
$oncheck -pr | grep -i 'page size'
$getconf PAGESIZE


# view the number of daemon servers and page flush processes
$onstat -u
$ps -ef | grep oninit

rollbacks, commits, cache, reads/writes etc.
$oncheck -cc
$oncheck -pc
$oncheck -pr


After going fully online, you can run the following to check VPs, users and sessions:
$onstat -g glo
$onstat -g ses session_id


Importing/exporting a database:
Note: when using dbimport, the Informix engine will drop the database and create it anew from the output of dbexport.


Other useful dbaccess SQL commands
info tables
info columns for table
info status for table
info access for table


Tuesday, September 15, 2009

PostgreSQL Cheat Sheet

Starting/Stopping Server:
$export PGDATA=/postgres/data
$pg_ctl -w start/restart -D /var/lib/pgsql/data -l log -o --port=5432
$pg_ctl -D /var/lib/pgsql/data stop -m immediate/smart/fast
$pg_ctl -s
$pg_ctl reload

Server Administration (psql)

Some useful commands :
=>select VERSION()
=>select CURRENT_DATABASE()
=>select CURRENT_SCHEMA()
=>select USER
=>select INET_SERVER_PORT()
=>select INET_SERVER_ADDR()
=>vacuum analyze verbose tablename
=>explain sql

#TCP/IP settings in /var/lib/pgsql/data/pg_hba.conf
host all 127.0.0.1 255.255.255.0 password
To allow remote connections: modify /var/lib/pgsql/data/postgresql.conf
#change
listen_addresses = 'localhost'
# to
listen_addresses = '*'

Forgot the root/postgres password?
#Modify pg_hba.conf to allow trust connections
$pg_ctl reload
$psql -U postgres
=>alter user postgres with password 'newpassword' ;
=>\q
#Set pg_hba.conf back to original settings
$pg_ctl reload

$pg_dump -U user db > tmp/db_dump
Create a database:
$createdb test
or
$psql -U postgres -c "CREATE DATABASE test" -d template1

$psql -U postgres -d test
=>\r ; clear buffer
=>\e ; edit buffer
=>\w ; write buffer
=> \! ; run sub-shell
=>\g or ; ; rerun contents of buffer
=>copy table to file
=>copy table from file
=>\set ; to view DBNAME, user, host and port
=>\connect or \c dbname ; no arguments gives current database name
=>\i file.sql ; execute script
=>\encoding ; to view or change the client encoding
=>\o filename ; send query results to a file
=>\o ; stop sending results and revert to stdout
=>\d table or \d for all tables
=>\dt or \di or \dv or \dC
=>\dg ; list of groups
=>\dn ; schemas
=>\dn+ schema
=>\c db username

To view a stored procedure in psql:

=>select prosrc from pg_proc where proname='procedure_name' ;
Some other useful system tables
pg_trigger (triggers)
pg_class (tables)
=>\d tablename ;describes the table

=>select relname from pg_class ; lists all the user tables

To view a schema with all its tables, try the following:
\o dbschema.txt
=>echo \d `select relname from pg_class`
\o

Error messages:
'undefined symbol: pg_valid_server_encoding_id' is often the result of incompatible libraries (usually libpq.so, typically after upgrading PostgreSQL)

$ldd -d path_to_psql
$export LD_LIBRARY_PATH=path to correct libpq.so
$sudo ldconfig

Thursday, August 27, 2009

Installing Informix 11.5 on Ubuntu

I recommend referencing the IBM Informix website which has a detailed description of the requirements and install process.



Get the prerequisite libraries on Ubuntu. On a 32-bit server:
$sudo apt-get install libaio1
$sudo apt-get install bc
$sudo apt-get install pdksh



Create group and username
$more /etc/group | grep 200
$more /etc/passwd | grep 200
$sudo groupadd -g 200 informix
$sudo useradd -m -d /home/informix -g informix -u 200 informix
$sudo passwd informix
$su informix

#give sudo access to informix
$sudo adduser informix admin
$sudo groups informix
$cd /home/informix
$mkdir informix_install
$cd informix_install



#download the Informix files
#and chown and chgrp to informix
#untar the file
$sudo ./ids_install
The default install dir is /opt/IBM/informix
$sudo cp /opt/IBM/informix/etc/sqlhosts /opt/IBM/informix/etc/sqlhosts.server
Edit sqlhosts.server to create your server entry
If you installed the demo server, then use profile settings to set:
INFORMIXDIR=/opt/IBM/informix
INFORMIXSQLHOSTS=$INFORMIXDIR/etc/sqlhosts.server
INFORMIXSERVER=server


Informix provides a tool, 'onmonitor', to manage dbspaces

To create a ~1 GB dbspace (1,024,000 KB):
$onmonitor
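Alternatively, a dbspace can be created from the command line with onspaces (the name and path are illustrative; -s takes the size in KB):
$onspaces -c -d datadbs -p /home/informix/data/datadbs -o 0 -s 1024000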

$dbaccess
Use dbaccess to create a database


To access the database as a user other than 'informix':
Add the username to the informix group
Be careful to make sure the other groups the username belongs to are retained
Add the Informix bin directory (/opt/IBM/informix/bin) to the $PATH

Set up a profile settings file that gets automatically executed, or add the following to a startup shell script file such as .bashrc:
INFORMIXDIR
INFORMIXSQLHOSTS
INFORMIXSERVER
PATH=$PATH:/opt/IBM/informix/bin
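For example, in .bashrc (the server name is whatever was chosen at install time; the sqlhosts path assumes the sqlhosts.server copy made earlier):

export INFORMIXDIR=/opt/IBM/informix
export INFORMIXSQLHOSTS=$INFORMIXDIR/etc/sqlhosts.server
export INFORMIXSERVER=server
export PATH=$PATH:$INFORMIXDIR/bin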


$dbaccess
Use the tool to create the database

Monday, August 17, 2009

SQL-92 compatible scripts

For installing and running SQL-92 scripts that can run on both Informix and PostgreSQL:

Two sources for SQL-92 syntax are:
HOWTO - SQL92 website
http://www.ocelot.ca/commands.htm
I found an excellent reference from Apress for PostgreSQL:
Beginning PHP and PostgreSQL 8 by Gilmore and Treat


IF EXISTS in statements
Remove any "if exists" clause from the SQL 'drop' commands:
ex. drop view if exists view1 should be changed to:
drop view view1
drop index if exists indx1 should be changed to:
drop index indx1

drop sequence if exists seq1 should be changed to:
drop sequence seq1
drop table if exists tab1 should be changed to:
drop table tab1


PostgreSQL scripts allow:
alter table.. owner ..

The owner clause in 'alter table' is not SQL-92 compatible.

When using an SQL DDL command such as create table,
note that setting a DEFAULT value that references a sequence is not allowed (see Sequences below)


BOOLEAN DATA
boolean data should be 't' for true and 'f' for false

SEQUENCES
Sequences are not defined in SQL-92 and are database specific.

In PostgreSQL, to create a sequence:
create sequence seq1
start with 1
increment by 1
NO MAXVALUE
NO MINVALUE

cache cache_size


is rewritten for Informix as:
create sequence seq1
start with 1
increment by 1
nominvalue
nomaxvalue
cache cache_size

Informix does not allow use of NEXTVAL or CURRVAL except in insert, select, delete and update statements;

NEXTVAL cannot be used in DDL.
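For example (illustrative table and sequence names):
insert into tab1 (id, descr) values (seq1.NEXTVAL, 'example') ;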
In PostgreSQL, the setval function is used for setting a sequence value :

select setval('seq1', value, true) is equivalent to select setval('seq1', value) ;
either sets the sequence so that the next NEXTVAL call returns the following value.
This is PostgreSQL-specific code.

VIEWS
PostgreSQL 8.3 easily integrates 'order by' into views.
In PostgreSQL
create view view1 as
select ..
from..
order by..

For Informix, rewrite as:
create view view1 as
select (..)
from
(select ..

order by.. )

According to the sql.org website, the 'or replace' option with the 'create view' command is not SQL-92, and Informix does not accept the clause. So instead use:
drop view view ;
CREATE VIEW view [ column [, ...] ] AS SELECT expression [ AS colname ] [, ...]
FROM table [ WHERE condition ]
[ WITH [ CASCADED | LOCAL ] CHECK OPTION ]


CONVERTING EPOCH TIME:
To convert a unix timestamp to informix:
select DBINFO('utc_to_datetime', unix_epoch_time)
from..

CONSTRAINTS
The syntax for adding constraints is:
ALTER TABLE tbl ADD CONSTRAINT
PRIMARY/FOREIGN KEY (col1,col2..)
CONSTRAINT constraint_name ;

PostgreSQL casts such as OID::bpchar or ::regclass
are not required for Informix and are not SQL-92

Connecting Cognos 8.3 to Informix 11.5 on Linux via ODBC

Download the Informix ClientSDK from IBM's website

On the server side (Ubuntu), make sure that a server entry is created with the IP/port/protocol in the file specified by the INFORMIXSQLHOSTS environment variable
Add the service to /etc/services, specifying the port/protocol
Start Informix from the logon specified in the ODBC connection; no startup error messages should be displayed:
$oninit
Adding the Ubuntu logon to the informix group (created at install time) solves quite a few problems.

Installing on Windows: installation of the CSDK is straightforward:

Extract the contents of the downloaded zip file
The installer creates an entry IBM Informix Client-SDK 3.50 under 'All Programs'
From there, it is easy to access Setnet32 to create the ODBC connection

Run setnet32:

Select the Server Information tab and enter the connection parameters:
Database: database
Host name: host IP
Server: the INFORMIXSERVER value set on the Linux server

Protocol name: onsoctcp (or whatever protocol is specified for the service in sqlhosts)
Service: the service specified in the /etc/services file, or the port number

Under the Host Information tab:
Current Host: Host IP
Username: the Ubuntu logon
Password Option: specify whether a password is required
Password: enter the Ubuntu userid password

On the server side:
Update the sqlhosts file with the service name
Edit /etc/services to include the service name with its protocol and port number
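For example (the server name, IP, service name and port are illustrative; 9088 is a common Informix default):

# $INFORMIXDIR/etc/sqlhosts
informix115 onsoctcp 192.168.1.10 ifxsvc
# /etc/services
ifxsvc 9088/tcp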

Test the connection and then Save the connection under a name such as informix115
You can now use the saved connection wherever an ODBC connection is allowed