Monday, September 5, 2016

Anatomy of a User Story

User Story vs Use Case
One of the big differences between a Use Case and a User Story is that a User Story explicitly states the value of the new functionality to the Customer.

User Story and QA
When is a User Story done? User Stories also change the world of QA and Testing as we know it: acceptance criteria are specified with the User Story and are embedded as the User Story is detailed out.
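As a rough illustration (the story and its acceptance criteria below are invented):

As a warehouse operator, I want tagged assets to be logged automatically when they pass the dock-door portal, so that I don't have to scan them by hand.
Acceptance criteria:
  • an asset carried past the portal at walking speed is logged within 1 second
  • an asset read twice within 5 seconds is logged only once
  • a failed read raises an alert on the operator console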

Saturday, June 2, 2012

2012 Ring of Fire Annular Solar Eclipse

Link to the 2012 Annular Solar Eclipse on YouTube: 2012 Annular Solar Eclipse - Totality
The camcorder used was a Canon DC22 with a filter from Rainbow Symphony


Pics and video clips of the 2012 Ring of Fire Annular Solar Eclipse are on their own page under '2012 Ring of Fire Solar Eclipse'.

RFID Reader Configuration Params

According to a major RFID reader and tag manufacturer, there are 128 ways to configure reader-to-tag and tag-to-reader communications. Many readers come with pre-set settings for 4 or 5 of the "best" or "better" combinations.

Here are some that I look out for when setting up a reader configuration (a configuration sketch follows the list):
  • RSSI: check the farthest distance a tag should be read from the antenna, get its RSSI value at that distance, and then set the reader to filter out any tags with a lower RSSI value

  • DRM: unless there are more readers than the 50 available channels (915MHz is really 902-928MHz with channel hopping across 50 channels), use Single Reader Mode or Multi Reader Mode if available
  • Note: If using Single Reader Mode (not all readers provide Multi-Reader Mode settings), then it becomes an interesting exercise to avoid tag collisions and missed tag reads caused by persistent session values

  • Auto-Start: With auto-start, readers are set up to read either periodically, immediately, or based on some input trigger. If using periodic reads, set the period small enough that someone walking by an antenna with an RFID-tagged asset will be in front of the antenna long enough for a read to take place. I like to use 250ms for directional portals if not using continuous or immediate reads.

  • Singulation and Dual Targets: With Class 1 Gen 2 standards, tags can be in either state A or state B. It's sort of like putting your hand down after a roll call and then leaving it down, if the analogy makes sense. So unless I know the last roll call and its results, it's best to choose "dual-target", which ensures tags in both states are read.

  • Channels: In a multi-reader environment, since there are 50 available channels to choose from, why start all the readers at channel 1? The chance of readers hopping onto the same channel goes up when they all start together, so I like to set up each reader to start on a different channel.

  • Sessions: Sessions are useful in a multi-reader environment: if 3 or 4 readers are working in the same area, each reader can be set to interact with tags in a different session. One thing I only recently understood was that sessions greater than 1 (i.e. Session 2 or 3) leave their tags in state B indefinitely. So definitely use dual target if you require all tags to be read and are using more than one session.
  • Note: This brings up a value to set for tag persistence. Persistence sets how long a tag will stay in state B ("hands down" in the roll-call analogy) before it switches back to state A. If using sessions greater than 1, setting this value may not have the expected effect

  • Picture a cone in front of the antenna: it's useful to never forget that an antenna creates a cone which defines its read circumference and area. These days we can fine-tune antennas to read from a couple of inches in front of the antenna to a couple of metres away (or farther). Granted, periodic stray reads occur, the antenna footprint is never a nice, even oval or circle, and dead spots or nulls are a reality. However, we can work with these limitations by imagining a read area shaped like a cone in front of the antenna and ensuring tags within this area are always read.
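Pulling these together, here is a minimal sketch of the values I end up choosing for a portal reader. The class and field names are invented for illustration only; real vendor SDKs expose the same knobs under their own names:

public class PortalReaderConfig {
    // All names and defaults below are illustrative, not a real vendor API
    double rssiFilterDbm = -65.0;    // RSSI: ignore tags weaker than the farthest wanted read
    boolean denseReaderMode = false; // DRM: only when readers outnumber the 50 hop channels
    int startChannel = 7;            // Channels: stagger start channels (1-50) across readers
    int session = 2;                 // Sessions: 2 and 3 leave tags in state B indefinitely
    boolean dualTarget = true;       // Dual Target: read tags in both state A and state B
    int triggerPeriodMs = 250;       // Auto-Start: periodic reads; 250ms suits directional portals
}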

  

I moved my post on some of the params that I find useful when setting up RFID readers and antennas to get better reads into its own page under RFID on this blog.

Tuesday, April 3, 2012

Using Informix 32-bit library with Cognos on 64-bit server

1. Getting and installing the Informix 32-bit ODBC libraries
a) I used connect.3.50.UC6.LINUX.tar for Cognos 10.1/Red Hat 5.4. One can get the libraries here:
http://www14.software.ibm.com/webapp/download/search.jsp?rs=ifxic
b) Create a group and user for informix
#/usr/sbin/groupadd informix
#/usr/sbin/useradd -g informix informix
#passwd informix
#mkdir /home/informix/connect // copy connect.3.50.UC6.LINUX.tar to /home/informix/connect
#cd /home/informix/connect
#tar -xvf connect.3.50.UC6.LINUX.tar
#./installconn // default install is to the /opt/IBM/informix folder
#vi $INFORMIXDIR/etc/sqlhosts // example: ifx_server1 onsoctcp IP service
#vi /etc/services // add the service from sqlhosts with port and protocol
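For example, with placeholder names, IP and port, the sqlhosts entry:
ifx_server1 onsoctcp 192.168.1.20 ifx_svc
pairs with this /etc/services entry:
ifx_svc 9088/tcp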
2. Set up Cognos baadmin profile
#su - baadmin //change to baadmin user
$vi .bashrc
a) set the following environment variables: INFORMIXDIR, INFORMIXSERVER
export INFORMIXDIR=/opt/IBM/informix
export INFORMIXSERVER=server_name

b) add the informix bin folder to the PATH
export PATH=$PATH:$INFORMIXDIR/bin
c) add the informix shared libraries to the library path
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$INFORMIXDIR/lib:$INFORMIXDIR/lib/esql:$INFORMIXDIR/lib/cli
I also modified the PATH and LD_LIBRARY_PATH variables for the Cognos libraries:
export PATH=$PATH:/opt/ibm/cognos/c10_64/bin:/opt/ibm/cognos/c10_64/bin64
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/ibm/cognos/c10_64/bin
(/opt/ibm/cognos/c10_64/bin contains libcogudaif.so. I also created a soft link in /usr/lib to this shared library and have not tried to remove it.)
d) Stop and restart all servers
#/etc/init.d/cognos10 stop
#/etc/init.d/cognos10 start

Don't forget to change /etc/services and add the Informix TCP connection info that is in the sqlhosts file.
Also, odbc.ini and odbcinst.ini are required in the /etc folder for ODBC connectivity.
Also, all libodbc*.* files are required in /usr/lib.
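As a rough example, an /etc/odbc.ini entry might look like this (the DSN name and database are placeholders; iclit09b.so is the ODBC driver shipped under $INFORMIXDIR/lib/cli):
[ifx_dsn]
Driver=/opt/IBM/informix/lib/cli/iclit09b.so
Description=IBM Informix ODBC Driver
ServerName=ifx_server1
Database=mydb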

With the above, I am able to use the Cognos Informix connection to connect to Informix.

Wednesday, May 25, 2011

Starting a remote Cygwin connection to an IBM cloud server

Configuring Cygwin to open a remote client to Linux (RHEL 5.4)
1. Download and install Cygwin on your Windows box
Required packages include xorg-server, xterm, xauth, openssh and an editor such as vim. You can also get xclock and xcalc, which are handy utilities
2. Start Cygwin from either the Windows START button or the desktop icon, if you opted to install one
3. $startxwin //starts the X server
This should pop up an xterm on your Windows box
4. Allow local connections to the X server:
$xhost +127.0.0.1
$xhost +localhost
Check the required entries in /etc/hosts:
127.0.0.1 localhost
5. $export DISPLAY=localhost:0.0
6. $ssh -X user@host -i key
The above should open a terminal to the remote server with a prompt
7. Check X11Forwarding
$echo $DISPLAY
or
$env | grep DISPLAY
DISPLAY=localhost:10.0

That's it! You should be able to start up remote X client software such as cogconfig.
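A quick sanity check, using one of the utilities installed in step 1:
$xclock &
If a clock window pops up on your Windows desktop, X11 forwarding is working.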

A couple of other troubleshooting pointers if xterm still won't start or an error displays:
a) under the home directory, check the per-user ~/.ssh/config file
Host *
ForwardX11 yes

b) Check the global /etc/ssh/sshd_config
X11Forwarding yes

c) Check the global /etc/ssh/ssh_config
ForwardX11 yes

Tuesday, September 28, 2010

Creating an Informix Database Server

I used the following steps to recreate an Informix database server and the databases within it on Linux/Ubuntu. Since I had a dbexport, I could use dbimport with the -d option to reimport the tables into the new database.

Login as informix or root
1) Modify onconfig file
rootdbs, path to rootdbs, DBSERVERNAME, log files and paths to log files
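For example, the entries I typically touch look roughly like this (paths and names are placeholders):
ROOTNAME rootdbs
ROOTPATH /path_to_rootdbs/rootdbs
DBSERVERNAME new_server_name
MSGPATH /opt/IBM/informix/online.log
LOGFILES 6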

2) Update the environment variable to the new server
export INFORMIXSERVER=new_server_name // placeholder: use the DBSERVERNAME from onconfig

3) Update sqlhosts file for the new server
If on Linux, the /etc/services file may also need to change, depending on the sqlhosts file settings

4) Create a path to the new rootdbs
$touch /path_to_rootdbs/rootdbs
$chmod 660 /path_to_rootdbs/rootdbs
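At this point the new instance usually needs to be initialized before dbspaces can be created. Note that -i (re)initializes the root dbspace and destroys any data already in the instance:
$oninit -iy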

5) Create any dbspaces you may need if you wish to store databases outside of rootdbs
$onspaces -c -d dbspace -p path_to_dbspace -o 0 -s size_in_KB
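A concrete example with placeholder path and size (the chunk file must exist first and be owned by informix with 660 permissions, as in step 4):
$touch /home/informix/data/datadbs1
$chmod 660 /home/informix/data/datadbs1
$onspaces -c -d datadbs1 -p /home/informix/data/datadbs1 -o 0 -s 102400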

6) If you don't already have a logon:
a) go through the usual user and group creation, and add the user to the appropriate groups with a profile so they can get to dbaccess
b) I don't create the database, since I plan to use dbimport from another server

7) Navigate to where the database.exp folder is
$dbimport -c newDB -d dbspace
If the appropriate .sql file is in the folder, this should run and reimport all the tables

8) You should be able to use dbaccess to access the new db
$dbaccess newDB

Monday, September 27, 2010

Partitioning Informix table data into quarterly and historic info


1) Set up a date column to use for partitioning
The easiest way is to have an update_ts column with the Informix Date format. I use Java to update the Informix database, and since I want to use MM/dd/yyyy as my timestamp, I first code the Java to drop the unnecessary time-of-day from the current timestamp generated by System.currentTimeMillis():
long ts = System.currentTimeMillis();
java.text.DateFormat sdf = new java.text.SimpleDateFormat("MM/dd/yyyy"); // no trailing space in the pattern
String newDateStr = sdf.format(new java.sql.Date(ts)); // date only, time of day dropped
System.out.println("updated timestamp:" + newDateStr);
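For context, here is a rough sketch of how the formatted string might be written back over JDBC (the URL, credentials and event_id key column are placeholders; the table and column come from step 2, and the string-to-DATE cast assumes a matching DBDATE setting):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

public class UpdateTimestamp {
    public static void main(String[] args) throws Exception {
        String newDateStr = new java.text.SimpleDateFormat("MM/dd/yyyy")
                .format(new java.util.Date());
        // placeholder Informix JDBC URL, user and password
        try (Connection con = DriverManager.getConnection(
                "jdbc:informix-sqli://host:9088/mydb:INFORMIXSERVER=ifx_server1",
                "user", "pass");
             PreparedStatement ps = con.prepareStatement(
                "update events_tbl set update_ts = ? where event_id = ?")) {
            ps.setString(1, newDateStr); // relies on the server casting the string to DATE
            ps.setInt(2, 42);            // event_id is a hypothetical key column
            ps.executeUpdate();
        }
    }
}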

2) Fragmenting an existing Informix table with the init clause
alter fragment on table events_tbl init
fragment by expression
partition ptn_qtr1 (update_ts between "01/01/2010" AND "03/31/2010") in dbsp1,
partition ptn_qtr2 (update_ts between "04/01/2010" AND "06/30/2010") in dbsp1,
partition ptn_qtr3 (update_ts between "07/01/2010" AND "09/30/2010") in dbsp1,
partition ptn_qtr4 (update_ts between "10/01/2010" AND "12/31/2010") in dbsp1,
partition ptn_historic (update_ts < "01/01/2010") in dbsp1 ;
create index tbl_idx on events_tbl (update_ts) ;

index tbl_idx created

3) A script runs on the last day of every quarter that detaches the partition holding the next quarter's data
(in the first year this partition will be empty)
Ex: End of Qtr2 and start of Qtr3
alter fragment on table events_tbl
detach partition ptn_qtr3 tbl_ptn_qtr3_tmp

4) Create and attach a new partition to the events_tbl (see the sketch below)
(The new partition should be a table with the same structure as events_tbl and the same index for partitioning)
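Roughly, with names following the pattern above (check the exact attach syntax for your Informix version):
create table tbl_qtr3_new (...) in dbsp1 ; -- same columns as events_tbl
alter fragment on table events_tbl
attach tbl_qtr3_new
as partition ptn_qtr3 (update_ts between "07/01/2011" AND "09/30/2011") ;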

5a) To re-attach the dropped partition so that it is now in the historic partition
The following adds the dropped partition into a new partition
alter fragment on table events_tbl
attach tbl_ptn_qtr3_tmp
as partition ptn_tmp
Note: this adds an extra partition to the table

5b) After re-attaching the old fragment, re-fragment to bring the table back down to the original 5 partitions
(dates will need to be adjusted to the new quarter/year)
drop index tbl_idx ;
alter fragment on table events_tbl init
fragment by expression
partition ptn_qtr1 (update_ts between "01/01/2010" AND "03/31/2010") in dbsp1,
partition ptn_qtr2 (update_ts between "04/01/2010" AND "06/30/2010") in dbsp1,
partition ptn_qtr3 (update_ts between "07/01/2010" AND "09/30/2010") in dbsp1,
partition ptn_qtr4 (update_ts between "10/01/2010" AND "12/31/2010") in dbsp1,
partition ptn_historic (update_ts < "01/01/2010") in dbsp1 ;
create index tbl_idx on events_tbl (update_ts) ;


This effectively reorganizes the data into the 5 partitions and drops the old fragments.

6) Once the historic data is stored, it can be used to feed reports or to segregate data for a datacube or datamart.