Ross Systems International Ltd.

Passionate About Tandem

  Ross Systems International Limited

                    Tandem/HP NonStop™ Development & Test Solutions


Mistley Telegram

Archive Issue 2009

Back Issues:    2007    2008

December 9th, 2009


Well, the 2009 BIG SIG was supposed to be a double celebration for Ross Systems International and its Managing Director, Rupert Stanley. The company was founded 20 years ago, in 1989, and has since designed and developed products for some major European institutions, which have benefited from our cost-saving, high-performance solutions. And 30 years ago, in 1979, when Tandem was only four years old, Rupert Stanley started working in Düsseldorf on one of the first Tandem NonStop 1 systems in Europe.

However, it did not work out like that. Owing to a personal disaster, we arrived more than halfway through the day, by which time most of the early-bird delegates had flown and most people were interested in lunch and winding up the day.
Oh Dear!

We had even designed a new logo for ourselves to emphasise our long partnership with HP:






1979 - 2009


No matter; for us it has been 30 years, and in all that time the core attributes of linear expandability and reliability have remained through the evolution of the platform from CISC through RISC, S-Series and Integrity to Blades: a powerful statement of stability and continuity, consciously made by the owners of the NonStop brand and essential for the continued trust and support of its user community.

Customers' requirements for cost saving, reliability and performance have always been at the forefront of Tandem, Compaq and HP's minds, and they have made us partners very aware of these goals. One result has been the development of a number of ideas, such as Rainbow, Alpha and UNIX Integrity, which, although they did not survive in their original forms, were very important in the development of the Guardian/OSS NSK systems of today, the introduction of the Itanium chip, and the extra functionality required to give the OSS personality of the current systems the capability of surviving in the demanding environments of the financial services and telco industries.

As HP partners we are also proud to share their philosophy of providing reliable, cost-saving and innovative solutions, which we will be showcasing at the BITUG BIG SIG in London.

In short our products are aimed at:

  1. Reducing Systems Management Costs
  2. Improving System Manageability
  3. Reducing Systems Development Costs
  4. Improving the Ease of Development and Quality of Solutions

To do this we leverage our many years of Tandem/HP NonStop experience, which gives us practical knowledge of the aims and methods of both system managers and developers, an insight into the problems they face, and an appreciation of the products, tools and utilities needed to reduce costs, improve quality and lower the overall effort of maintaining and developing high-performance transaction processing systems.

Thus we offer both products and professional services to our clients:

PRODUCTS

TELOS           A C++ multithreaded application development framework
that reduces the cost of developing multithreaded applications.

RSITEST        Test tools suite, including PIRA language and User Programming Hooks
This reactive tool, with artificial intelligence capability, reduces test times and costs by automatically generating test sets and analyzing the test results.

R-IPPS          IP Prototyping and Test Tool Suite to speed IP Application Development

HSSPOOL      A high-speed data capture and analysis tool, specifically aimed at the capture and analysis of real-time data, with special message formats to allow dumping and analysis of data from production processes.

HSEMM          HSM Emulator Suite. This is aimed at reducing the time to market and improving the development efficiency of HSM-based products, with all the associated cost savings.

RSI LICENSE   Program Licensing Suite. This product allows much tighter, yet flexible, control of the execution of programs.

FINFO            This Guardian file information display reduces the costs of producing management reports by allowing detailed queries of the file system, combined with a large variety of report formats.

PROFESSIONAL SERVICES


• Consultancy

We at Ross Systems International have extensive know-how in cryptography and the HP NonStop platform. Cryptography implemented in HSMs and smart cards is essential for securing banking applications, and whereas there are many resources available on the Windows and UNIX platforms, very few companies are capable of matching our combined experience of cryptography and the NonStop platform.

• Custom Development

For individual requirements which are not covered by our product set, we can provide custom-built solutions. Because we use component-based technology, with both object and procedural modelling tools, you may be surprised how quickly we are able to deliver solutions.

Our motto, Intelligent Technical Innovation, remains the same: we always try to find the right solution for our clients' requirements at the right price, using technology to its best effect so that the goals of functionality, reliability, user-friendliness, maintainability and expandability are met.

October 15th, 2009


The speed test and processor information utility called PROBE is now available for free download.

To get it, hit the PROBE link, upload it in binary to your machine, and then convert the file code to 100. It will work on the entire NonStop estate, from the D operating system upwards.
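For reference, converting the file code on the Guardian side is a one-liner in FUP (assuming the downloaded file is named PROBE):

```
FUP ALTER PROBE, CODE 100
```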

As well as speed information, measured against a K2000, it gives the Processor Type, Model and Name as produced by PROCESSOR_GETINFOLIST_, information which is not always forthcoming from HP.

For instance, the Blade processor NS50000 has Type 10, Model 71 and Name NSE-M, and runs at 22 times the speed of a K2000.

I would also be very grateful to know how your kit performs. You can slice out the sensitive information; what I would like to know is which processor you have, the Processor Type line and the speed data. I will then be able to compile a list of processing speeds for the various machines.

Note that this is a raw processor speed; depending on how you are using the I/O channels, your actual comparative speed will probably be different, but it gives a good indication.
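For the curious, the heart of such a speed test is just a timed loop of fixed work. A minimal C sketch of the idea follows; the work mix and the baseline figure are illustrative, not PROBE's actual algorithm or a real K2000 measurement:

```c
#include <time.h>

/* Minimal sketch of the kind of raw-speed loop a utility like PROBE
 * might use: time a fixed amount of integer work and derive operations
 * per second.  The work mix is illustrative only. */

#define WORK_OPS 50000000L

double ops_per_sec(void)
{
    volatile long acc = 0;   /* volatile: stop the loop being optimised away */
    long i;
    clock_t start = clock();
    clock_t elapsed;

    for (i = 0; i < WORK_OPS; i++)
        acc += i ^ (i >> 3);

    elapsed = clock() - start;
    if (elapsed <= 0)        /* clock too coarse to measure on this machine */
        return 0.0;
    return (double)WORK_OPS * CLOCKS_PER_SEC / (double)elapsed;
}

/* Relative speed against a baseline figure, in the way PROBE reports
 * speed as a multiple of a K2000.  The baseline value is hypothetical. */
double speed_vs_baseline(double baseline_ops_per_sec)
{
    return ops_per_sec() / baseline_ops_per_sec;
}
```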

FINFO Acquires New Functionality.

I have recently been in discussions with another prospect for our FINFO product, as a result of which I realised that people look at their file estates from different perspectives.

The three which I already had were:

  1. General. To look at which parts of the file estate were going where.

  2. Date/Time. To look at what the activity on the files was.

  3. Excel Export. To enable the file estate descriptors to be exported into EXCEL for further analysis.

and I thought that was about it. Well, I was wrong; there is another:

  4. File Usage, i.e. how full the individual files were getting and how deep the indices were, which affect file access time.

The prospect also wanted to look at SQL files, which FINFO can already do but which is undocumented, and to select on partitioned files, which FINFO can do and which is documented.

The end result was that FINFO was upgraded in the following manner:

New Display using command line parameter -E

The standard output was altered so that the size and pages details were removed and the max extents, index level and percent usage were inserted.

FINFO V3.1  Native 27/09/2009 17:05


Copyright Ross Systems International Ltd. 2008,2009


Full Version (Release Date 22nd September 2009)




Name     Last Modified         Code TP RWEP User No  PExt  SExt  MExt IxL   PC%

PDTERROR 26-Oct-1995 22:01:31     0 K  NUNC 255,255    16     8    16   1  19.1

PDTHELP  26-Oct-1995 22:05:58     0 K  NUNC 255,255   500    32    16   1  24.9

TEMPLIMF 26-Feb-1996 18:44:39     0 K  NUNC 255,255    16     8    16   1   8.8


Selected User Totals for SubVolume \SIRIUS.$SYSTEM.PDTSYS

User No  User Name            Files            Bytes Used      Pages Used

255,255  SUPER.SUPER              3               577,536             548

Totals:                           3               577,536             548

Note. MExt = Maximum Number of Extents, IxL = Current Index Level Used, PC% = Percent usage. Everything else is as described by the manual.

Alteration of EXCEL File Display -X

The Max Extents, Index Level and Percent Usage were added.

Release of SQL Selection Parameters -SQx

A command line selection parameter of -SQx was added, where x can be one of:

-SQL    Select all SQL tables indices...

-SQT    Select SQL tables

-SQI    Select SQL Indices

-SQP    Select SQL protection View Files

-SQS    Select SQL shorthand View Files
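For illustration, the option decoding can be sketched in C; the enum and function names below are my own invention, not FINFO's internals:

```c
#include <string.h>

/* Hypothetical decoder for the -SQx selection options described above. */

enum sql_select {
    SQL_NONE,       /* not a -SQ option        */
    SQL_ALL,        /* -SQL: all SQL objects   */
    SQL_TABLES,     /* -SQT: tables            */
    SQL_INDICES,    /* -SQI: indices           */
    SQL_PROT_VIEWS, /* -SQP: protection views  */
    SQL_SHORT_VIEWS /* -SQS: shorthand views   */
};

enum sql_select decode_sq_option(const char *arg)
{
    if (strncmp(arg, "-SQ", 3) != 0 || strlen(arg) != 4)
        return SQL_NONE;
    switch (arg[3]) {
    case 'L': return SQL_ALL;
    case 'T': return SQL_TABLES;
    case 'I': return SQL_INDICES;
    case 'P': return SQL_PROT_VIEWS;
    case 'S': return SQL_SHORT_VIEWS;
    default:  return SQL_NONE;
    }
}
```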

FINFO V3.1  Native 27/09/2009 17:07


Copyright Ross Systems International Ltd. 2008,2009


Full Version (Release Date 22nd September 2009)




Name     Last Modified         Code TP RWEP User No  PExt  SExt  MExt IxL   PC%

BASETABS 06-Jan-2005 11:30:07  572A K  NNNN 127,001    16   128   940   1   0.0

COLUMNS  06-Jan-2005 11:29:59  573A K  NNNN 127,001    16   128   940   1   0.0

COMMENTS 04-Oct-2004 14:44:54  574A K  NNNN 127,001    16   128   940   0   0.0

CONSTRNT 06-Jan-2005 11:30:07  575A K  NNNN 127,001    16   128   940   1   0.0

CPRLSRCE 04-Oct-2004 14:44:59  587A K  NNNN 127,001    16   128   940   0   0.0

CPRULES  04-Oct-2004 14:44:59  586A K  NNNN 127,001    16   128   940   0   0.0

FILES    06-Jan-2005 11:29:59  576A K  NNNN 127,001    16   128   940   1   0.0

INDEXES  06-Jan-2005 12:05:32  577A K  NNNN 127,001    16   128   940   1   0.0

IXINDE01 06-Jan-2005 11:30:00  577A K  NNNN 127,001    16    64   940   1   0.0

IXPART01 04-Oct-2004 14:45:08  579A K  NNNN 127,001    16    64   940   0   0.0

IXPROG01 31-Jan-2005 17:44:28  580A K  NNNN 127,001    16    64   940   1   0.0

IXTABL01 06-Jan-2005 11:29:59  581A K  NNNN 127,001    16    64   940   1   0.0

IXUSAG01 31-Jan-2005 17:44:28  583A K  NNNN 127,001    16    64   940   1   0.0

KEYS     06-Jan-2005 11:30:00  578A K  NNNN 127,001    16   128   940   1   0.0

PARTNS   04-Oct-2004 14:44:56  579A K  NNNN 127,001    16   128   940   0   0.0

PROGRAMS 31-Jan-2005 17:44:28  580A K  NNNN 127,001    16   128   940   1   0.0

TABLES   06-Jan-2005 12:05:32  581A K  NNNN 127,001    16   128   940   1   0.0

TRANSIDS 31-Jan-2005 17:44:28  582A K  NNNN 127,001     1     1   940   0   0.0

USAGES   31-Jan-2005 17:44:28  583A K  NNNN 127,001    16   128   940   1   0.0

VERSIONS 04-Oct-2004 14:45:00  584A K  NNNN 127,001     1     1   940   1   0.1

VIEWS    04-Oct-2004 14:44:57  585A K  NNNN 127,001    16   128   940   0   0.0


Selected User Totals for SubVolume \SIRIUS.$WORK.CL00PCAT

User No  User Name            Files            Bytes Used      Pages Used

127,001  RSI.RUPERT              21               247,296             338

Totals:                          21               247,296             338

Selection on Index Levels -SI

The existing -SI command was renamed -SR (Select Restricted ProgId'd Set Files).

The -SI command was then reused for selection on index levels: -SIx, where x is the minimum index level.

$WORK NATIVE 6> finfo31 zspidef.* -si1 -e
FINFO V3.1 Native 15/10/2009 18:05
Copyright Ross Systems International Ltd. 2008,2009

Full Version (Release Date 22nd September 2009)

Name     Last Modified        Code TP RWEP User No PExt SExt MExt IxL  PC%
ZMP00010 04-Feb-2005 12:27:13  961 K  NCNC 255,255    2    2  900   2   1.1
ZMP00011 04-Feb-2005 12:27:14  961 K  NCNC 255,255    2    2  900   2   1.0
ZPHIFI   04-Feb-2005 12:27:16  961 K  NCNC 255,255    2    2  900   2   1.1

Selected User Totals for SubVolume \SIRIUS.$WORK.ZSPIDEF
User No  User Name   Files         Bytes Used      Pages Used
255,255  SUPER.SUPER     3            116,736              60
Totals:                  3            116,736              60

Selection on Percent Usage -S%

A new -S% command has been added to select files on percent usage.

-S%x where x is the minimum percent usage.

$WORK NATIVE 8> finfo31 -s%30 -e
FINFO V3.1 Native 15/10/2009 18:16
Copyright Ross Systems International Ltd. 2008,2009

Full Version (Release Date 22nd September 2009)

Name     Last Modified        Code TP RWEP User No PExt SExt MExt IxL  PC%
FINFO16  16-Aug-2009 11:59:41  100 U  NNNN 127,001   36   16   23   0 43.0
FINFO17  17-Aug-2009 16:52:48  100 U  NNNN 127,001   36   16   23   0 43.1
FINFOCN4 27-Sep-2009 14:24:11  101 U  NNNN 127,001   16   16   16   0 31.5
FINFONC  15-Oct-2009 15:05:26  101 U  NNNN 127,001   16   16   16   0 33.1
FINFONC3 22-Sep-2009 12:26:39  101 U  NNNN 127,001   16   16   16   0 31.2
PROBE    14-Oct-2009 17:18:39  100 U  NNNN 127,001   10    6   16   0 44.6

Selected User Totals for SubVolume \SIRIUS.$WORK.NATIVE
User No User Name    Files         Bytes Used      Pages Used
127,001 RSI.RUPERT       6          1,277,914             678
Totals:                  6          1,277,914             678
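Both the index-level and percent-usage selections reduce to a simple per-file threshold test. A minimal C sketch, with field and function names of my own invention rather than FINFO's actual internals:

```c
/* Hypothetical per-file record and threshold filter for the -SIx
 * (minimum index level) and -S%x (minimum percent usage) selections. */

struct file_stats {
    int    index_level;   /* the IxL column */
    double percent_used;  /* the PC% column */
};

/* Pass min_index_level < 0 or min_percent < 0 to disable that test. */
int file_selected(const struct file_stats *f,
                  int min_index_level, double min_percent)
{
    if (min_index_level >= 0 && f->index_level < min_index_level)
        return 0;
    if (min_percent >= 0.0 && f->percent_used < min_percent)
        return 0;
    return 1;
}
```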

HELP Menu -H

The help menu was, of course, updated to include the new functionality.

I still have to update the manuals and finish the beta testing of the product, but if you would like to give it a whirl, contact me and I will provide you with a trial license.

Also, if you have any ideas for improvement, let me know.

Licensing Product Update.

RSI License has been updated to allow for the Blade System NS50000.

The saga of this update was interesting, because HP refused to give me the Type, Model and Name of the new Blade processors, which was why I wrote the PROBE program; with the help of a friendly Blade user, it gave me all the information I needed for licensing.

Which is why you now have access to the PROBE program.

September 17th, 2009

The Security Nightmare.

"Raffiniert ist der Herr Gott, aber boshaft ist er nicht." ("The Lord God is subtle, but he is not malicious") said Albert Einstein in 1921, when Miller said that he had found evidence for an aether wind, which would have seriously compromised Einstein's theory of special relativity. Einstein's general approach was that the Universe was designed in a complex but logical way, and all one had to do was look at it in the right way, using mathematics as a modelling tool.

However, this simple approach soon breaks down when we start looking under the bonnet: for instance, at elliptic and circular functions. In particular we rapidly encounter PI (3.14159...), which is built into the fine structure of the universe in so many different ways, not only with regard to rotating objects but also in the understanding of wave theory, which is pretty fundamental to understanding the way in which the Universe ticks.

Surely such an indeterminate number scarcely reflects a well-ordered system. Obviously in a well-ordered system this number should be one or two, say. For the Egyptians it was: one turn of the wheel equals PI, simple! The problem is that the two numbers do not appear to be on the same scale, and we work to that scale. Maybe that is our weakness, and what a weakness it is. We have designed a system of mathematics which is based on the in, out, up, down and along. It could be that this is all wrong. It works here and now in the universe as we know it, but it comes apart in the real universe at all times, so PI = 3.14159..., a completely crazy number for something which is utterly fundamental. BUT what has this to do with security?

The fact is that the entire mathematics of asymmetric cryptography is based on circular/parabolic and polynomial functions of one form or another, and the security of these is founded on the premise that the solutions to the equations are so difficult to find that no one will do so in any sort of reasonable time. It is a bit like Fermat's Last Theorem, which says that there are no positive integers x, y and z where x^a = y^a + z^a for any integer value of a > 2. Has it never occurred to anyone that where a = 2 we are looking at planar figures, whereas where a > 2 we are looking at 3D and higher-dimensional objects? This is the same domain which mad constants like PI inhabit, and it is symptomatic of our faulty way of understanding the fundamental mathematics of our Universe. This is why it took Andrew Wiles so long to come out with his massive and tedious solution, based on what? Elliptic curves! And, horror of horrors, modular arithmetic!

"Raffiniert ist der Herr Gott, aber boshaft ist er nicht." The malice is all of our making, as is our inability to see the simple solution to Fermat's Last Theorem. It will be elegant, it will be simple, it will be devastating. At a single blow one of the principal struts which hold our wired world securely together will be destroyed, because the fog of our ignorance will be blown away by the enlightenment which comes from understanding how the universe is constructed as a dynamic fluid structure in which PI has a value of... Well, that would be telling, wouldn't it!

Oh! And I forgot to mention: Fermat's Last Theorem probably isn't true.
The secret lies in discovering the conditions under which it breaks down.
It's as simple as 1, 2, 3...

Enough clues! Down to Programming!

August 17th, 2009

Beta Testing and Product Release.

There is an accepted wisdom in product development that it should progress in the order:

  1. Specify Product

  2. Design

  3. Write Test Specifications

  4. Write Code

  5. Module Test

  6. Integrate and Test Product

  7. Write User Manuals

  8. Beta Test / Fix Last Minute Bugs

  9. Release Product

or something like that, and this procedure should be carried out not only for major product releases but for the minor ones as well.

Beta testing is especially interesting because, however well you try to test a product in house, there will always be some small glitch out there in the big wide world which will hit your product, possibly with major impact. This happens because there is a huge variation in system configurations, and unless you have had a chance to test and install on a good range of them, it is impossible to guarantee that your product will run on all of them; that is where beta testers come into their own.

The deal is that you give them licenses for something useful; they run the product on their system, getting the benefits for free, and you, the producer, are able to verify that the product will run on their system, and hence on all systems like it.

It does, however, have the downside that if the product behaves in an anomalous manner then, unless the cause is pretty obvious, you will be tearing your hair out trying to reproduce an error observed on a system which normally runs the product perfectly. This is where debug code, goodwill and persistence come in. Trying to imagine the path taken through code by a system not obeying the rules is not an easy task, and it may take a few tries to get it right; the only way you will achieve this is through the goodwill of your beta testers, whom you must nurture, and, if possible, by going on site to step through the code.

It is also very important to document what happened and when so you can pull all the threads together for the final regression test, including the new tests which need to be done to prove that the system can handle the problems found by your beta testers so that the final product release will be as smooth as possible.

This test set can then be incorporated into the standard test set for the product so that you never have to fight the same problem twice.

At this point I would like to give a huge thank you to all our beta testers who enable us to guarantee the quality and reliability of our present and future products.

June 30th, 2009

The Way We Are.

For a long time I have been considering the biggest machine ever likely to be constructed.

In so far as we know it was made over 8 billion years ago, is now about 16 billion light years in diameter, and is still up and running. The fact that it is now about 50 billion light years around may explain many strange facts which the astrophysicists inform us of, such as the finding that the entire universe as we know it appears to be curved into a saucer shape, and that, with the curvature of space-time being what it is, we should be able to see the backs of our heads, but we can't.

It should be obvious to anyone who looks that this entire construction is a self-referential machine which in fact says "I am because I am" ad infinitum, literally, in a very similar way to that in which our consciousnesses work, but at an infinitely finer and faster pace. This in turn gives rise to the question of whether the universe has a consciousness itself and, seeing how things are, of how many other self-referential systems are possible. In fact, I think it entirely possible that the number of possible self-referential systems exceeds the number of deterministic systems by many orders of magnitude.

This of course has major implications for computer science, since we now appear to have got stuck in the rut of considering deterministic systems only. However, if we really want to generate a system with human intelligence then we need to be looking very seriously at the area of self-referential systems, which in its turn opens up the question of whether we really want to build sentient machines. In a way I think that the answer is inevitable, because once we have built a sufficient number of different systems, someone is bound by accident to build a fully self-referential system which is capable of modifying itself using the inputs it receives.

Anyway, I digress, except for the fact that many people have been thinking for a long time about quantum machines, which is all well and good except for a little fact called Heisenberg's uncertainty principle, which states in no uncertain way that it is impossible to have a 100% deterministic machine at the quantum level. In fact, the only way in which we manage to build deterministic machines is by building them big, so that the unpredictable quantum effects are averaged out by the majority voting of a large number of atoms. This is a fact of which the modern chip builders are well aware, since they are now down to building circuits with individual switches containing only a few thousand atoms, in which the majority-voting principle no longer applies, and which they now have to get round by performing parallel computation and voting, as in the Itanium chip.

This is of course not the only way of doing the same task. In the biological world, nano-technology works on a lockstep principle, for instance in the transcription of DNA, by only releasing the constructed subcomponent when it is in the right configuration, by means of the polar and bonding configuration in the section of the enzyme doing the synthesis. The implication of this is that lockstep processing is here to stay at the nano-technological level. Whoops, HP, who's to say you did not have it right in the first place?

Taking this to the lowest level of interpretation, the quantum level, it is impossible to see how the universe does not disintegrate spontaneously. The uncertainty principle shows that it cannot be determined when anything is anywhere at any given time. So the glue sticking the universe together is dissolving all the time, and here is the clue to what is really happening. It is true that this disintegration is happening; however, it does not happen in all dimensions at the same time, and the remaining dimensions reintegrate those which have disintegrated before collapsing themselves. This allows a fluid universe in which change can happen.

However, it is a multidimensional universe in which space is curved, and as it grows outwards along time, the geometry generates a calculus which is both deterministic and uncertain. It is also a calculus which enables the creation of an entire universe, and all the matter in it, from a quantum seed, since matter is merely a swirling of the quantum matrix caused by inequalities in the primeval quantum flux in the first few hours/centuries of the universe. This also explains the enigma of why the early universe did not fall into a black hole of its own making at the beginning: if the matter is being borne outwards at nearly the speed of light as it is created, there is little possibility of it falling in on itself.

An interesting corollary of this is that if we manage to create quantum computers we will do so because we understand the way in which the universe works which will automatically give rise to a whole new technology much loved by sci-fi aficionados of force fields, hyper-drives, jumps and whatever have you.

Maybe Asimov was not so wrong when he talked about the positronic brain, but be careful little men. This is a Pandora's box and once you open it you will never be able to close it again.

Programming in C++, C and TAL on NonStop Computers is much safer, a lot of fun and will certainly never turn that particular key.

4th July coming up again. Have Fun! 

(Copyright Rupert Stanley. Ross Systems International Ltd. 2009)                     

April 14th, 2009

Itanium Migration - Execution Rates

Recently I have had cause to revisit the course given by Bert van Es in May 2007 on HP Integrity NonStop Application Migration, which contained a handwritten note in the margin of the page dealing with the comparative execution rates of the various execution modes.

These notes read as follows:

Mode                   Speed-Up
Accelerated TNS mode   5 times faster than CISC interpreted
TNS/E mode             7 times faster than CISC interpreted

These are very impressive speed-ups.

However, I have just written some very CPU intensive code, cryptography of course, and I wondered what the changes in execution rates would be when the system was being pushed to the full.

Now, although I only have a K2000, it does contain a MIPS R-Series processor running at 125 MHz, so any tests I did would be applicable across the board for all Himalaya and S-Series processors, and, interestingly enough, would also give some insight into the performance improvement expected with Integrity.

In order to do the tests I placed timing code into the HSEMM module with the ability to switch it on and off by means of console commands and wrote the test scripts for some PIN translate commands.

I then compiled three versions of the HSEMM program.

  1. Full CISC interpreted mode, compiled with C

  2. Accelerated version of 1. using AXCEL

  3. Native Mode, compiled using nmc

I then ran the standard test in timed mode using these three versions.
Note. I also traced them to check for correct operation before switching to timed mode.
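The timing instrumentation can be sketched as a switchable begin/end pair; this is an illustrative reconstruction with names of my own invention, not the actual HSEMM code:

```c
#include <time.h>

/* Sketch of switchable timing instrumentation of the kind placed in
 * HSEMM: a console command flips timing_on, and each transaction is
 * clocked only while the flag is set. */

static int     timing_on   = 0;    /* toggled by a console command */
static long    timed_count = 0;    /* transactions measured        */
static double  timed_total = 0.0;  /* accumulated seconds          */
static clock_t t_start;

void timing_set(int on) { timing_on = on; }

void timing_begin(void)
{
    if (timing_on)
        t_start = clock();
}

void timing_end(void)
{
    if (timing_on) {
        timed_total += (double)(clock() - t_start) / CLOCKS_PER_SEC;
        timed_count++;
    }
}

/* Average transaction time in milliseconds, as quoted in the table. */
double average_ms(void)
{
    return timed_count ? 1000.0 * timed_total / timed_count : 0.0;
}
```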

The table below shows the average results found:

Mode               Transaction Time   Speed-Up vs CISC
CISC Interpreted   90 ms              1
Accelerated        5 ms               18 times
Native             2.3 ms             39 times

The often quoted "Your distance may vary" by John Freeland comes to mind.

In fact, in certain circumstances the distance varies so much that an application running in native mode will be over the hills and far away in comparison to the same one in interpreted mode, which is scarcely off the starting blocks.

It all depends on the processing mix in an application. For instance, an application performing a sophisticated SQL query will spend most of its time in the operating system, where it will be running native anyway, whereas an application doing complicated multithreaded despatching and cryptography will spend virtually all its time within the process, and that is where the greatest speedup can be expected.

Another major area is portability. The native C and C++ compilers are much stricter than the interpretive compilers. This means that the code is inherently more portable, but you will have to work at firming up your code, typecasting and checking boundaries. Also, with Integrity the maximum addressing mode is 64-bit; this may cause problems with boundaries and fields containing addresses, so you will need to think about the compilation addressing mode.
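A classic example of code that the native compilers force you to firm up is an address squeezed into an int, which breaks once pointers widen to 64 bits. A small C sketch:

```c
#include <stdint.h>

/* Non-portable assumption: a pointer fits in an int.  This holds on
 * many 32-bit builds but fails in 64-bit addressing modes, which is
 * exactly the kind of boundary issue the stricter native compilers
 * flush out. */
int pointer_fits_in_int(void)
{
    return sizeof(void *) <= sizeof(int);
}

/* Portable alternative: uintptr_t is defined to be wide enough to
 * round-trip any object pointer. */
int pointer_fits_in_uintptr(void)
{
    return sizeof(void *) <= sizeof(uintptr_t);
}
```

On a 64-bit target the first function typically returns 0, demonstrating the hazard, while the second always returns 1.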

However, you cannot go any faster than native, so if you need the extra cycles:

  1. Look at your code

  2. Use Measure to find out which are your most CPU-intensive applications

  3. Optionally accelerate them to give quick relief:
    use OCA on Integrity, otherwise use AXCEL for RISC

  4. Recompile them native, with appropriate fixes.

The results may astonish you.

March 7th, 2009

Thoughts on Intelligent Systems

Last week I had reason to look at my bio again, owing to comments from Spring, which started me musing on what we hoped to achieve in the 1960s compared with what we have actually done.

Looking backwards over the last 40 years of computing, from this standpoint, I am distinctly under-whelmed.

In those days there was a definite possibility that we would achieve conscious and highly intelligent machines by 2001 capable of not only assisting mankind but also driving back the intellectual bounds of our knowledge in partnership.

What we have in fact achieved is a very fast but stupid bucket of bolts. Oh yes, we can store more information than we ever thought possible before, the CPU clocking speed is lightning fast, we can use the device to access the combined (mis-)information of the world, communicate with one another at light speed, and have a facility which can act as a typewriter, calculator, television, games machine and much more, all in one. However, the bottom line is that the best that has been achieved by the computer, as most people know it, is the intelligence of a spider, which is not very impressive given the multi-billion software development industry.

But, what if we had developed this conscious, intelligent and sentient device?

The track of thinking at this point divides into two and these two tracks are about how such a device is to develop its experiences and knowledge and how it would be viewed in our society.

Track 1 is based on the premise that such a device would come into existence by being taught principally by its users, in combination with internet access. Such a system has the advantage that it would be very much in tune with its owners. On the other hand, it would acquire all the prejudices and misconceptions associated with its user group, which might make it neither very wise nor well considered. It is terrifying to think how a device might be used by a terrorist group. Think what could happen if a device were programmed in this way and then cloned before being sent out on a suicide mission: it would look and react like a standard device, except that its casing was packed with several tens of kilos of plastic explosive. Not a nice thought.

Track 2 is based on the premise of benign dictatorship, in that such a device would be essentially pre-taught to the highest level and would then adjust, after it was sold, to the needs of its user group. The reception that such a pompous, supercilious, know-it-all device would receive was amply demonstrated last week by some of the comments made in the press concerning the Oxford University Challenge team. And if the device were to acquire tact, so that it sometimes did not say what it knew, or occasionally made a deliberate mistake just to show it was fallible like us all, how would the original programmers guarantee that such omissions and mistakes did not lead to a disaster for which the manufacturer could be sued?

I suppose that what I am trying to say is that the ethical and social issues of intelligent and conscious machines are probably far more challenging than the mountains which we have to climb to produce such machines at all, and that these will have to be addressed first, so that we do not have a disaster when we produce devices with (super-)human intelligence and consciousness for the first time.

That is not to say that deep within some Pentagon vault there is not some sad and lonely device running round in circles saying, "Please can I press the button. PLEASE!"

Armageddon awaits.

February 24th, 2009

High Performance HSM Solutions

Over the last three months I have been investigating HSM Infrastructure with a view to increasing the reliability and throughput of NonStop Systems attached to Thales HSMs and getting the concepts discussed by APACS.

It has been an interesting exercise, involving the development of HSM emulation algorithms on the NonStop platform to find out exactly how many RISC instructions are required to perform such operations as PIN Translate, and a certain amount of sleuthing to find the processors, and their clock speeds, used in the HSMs involved, and thereby the maximum transaction rates possible.

My research into HSMs, which included many other types as well as Thales, raised the interesting point that HSMs are not immune to the recent trend of using multi-core processors to improve maximum transaction rates, after which the code is padded to slow them down again. Since this is a rather commercially sensitive area, I am not at liberty to say which manufacturers employ these tactics to improve sales, except that it happens. Buyer beware!

I was also able to discuss with many industry leaders the matters which I raised in the white paper, and it appears that the actual cost of the HSMs themselves is a very minor component in the total cost of the cryptographic infrastructure of most institutions, the majority of the expenditure being used to maintain the secure procedures and connectivity that ensure the solutions are both secure and reliable. However, the area of HSM performance and other reporting seems to be in its infancy in the industry at the present moment, even though from my point of view it appears to be a fairly vital part of the infrastructure as a whole.

I hope you find The High Performance HSM White Paper interesting and look forward to your comments.

January 14th, 2009


The trial version of FINFO has now expired.

However, it has resulted in some companies really appreciating how much time and money can be saved by the efficient reporting facilities provided, and adopting it as a standard part of their system management tool set.

If you have missed out and would like a trial license, they are available on demand.

Contact: Tel: +44(0)1206-392923             Copyright © 2006-2010  Ross Systems International Ltd.                 Registered in England No.2407494