The computing sessions at this year's Royal Economic Society conference were organised by Ray O'Brien of the University of Southampton. The first double session, on the Tuesday afternoon, was devoted entirely to WinEcon. It began with a briefing on the progress of the project by Phil Hobbs, followed by a demonstration by Li Lin Cheah of the new, improved versions of the student's and lecturer's interfaces, before we moved to the computer lab for a hands-on tutorial session. The second session, on Wednesday morning, was devoted to demonstrations and critical reviews of new and updated software. Ray O'Brien described the session as a "software sandwich" because in between presentations looking at software for modelling systems of equations based on time series data we had a review of software devoted to semi-nonparametric time series analysis.
Phil Hobbs began with a brief review of the scope and nature of WinEcon (it seems that there are still some people out there who don't know anything about it!). He described it as a "unique comprehensive tutorial package for computer-based teaching and learning for introductory economics". He invited us to think of it as "an interactive textbook in a ring-binder" and stressed both its scope and its flexibility. The software offers more than 75 hours of tutorial material organised into nearly a thousand separate topics. In the default configuration these are arranged into 200 sections and 25 chapters. However each lecturer will be able to select and rearrange these topics to suit his or her own needs, customising the materials for the course requirements at a particular institution using WinEcon Lecturer, a courseware management program which goes with WinEcon. The WinEcon package also includes a full glossary of economic terms, references to key textbooks, extensive economic databases, supporting tools such as a calculator and a spreadsheet, together with a full set of self-assessment tests and examination questions. We were able to see all these features in the hands-on session afterwards.
Phil then gave a brief update on recent WinEcon events and a timetable for future developments. The Early Access version of WinEcon 4.0 had been released at the CALECO 95 conference last September which also saw the signing of a world-wide marketing and distribution agreement with Blackwell Publishers. In November the Economics Consortium had been awarded a medal by the British Computer Society for WinEcon. WinEcon 4.0 was released on CD-ROM in January to coincide with the Allied Social Sciences Association meetings in San Francisco, where it was exhibited by Blackwell and demonstrated at a series of invited presentations, getting a very positive reception.
In February the WinEcon 4.0 User Guide and Product Overview was published, and in March the WinEcon Web site was launched. (The URL is http://www.sosig.ac.uk/winecon/). From the web site you can access all of the information in the User Guide and Product Overview. If you wish you can download the entire guide - but be warned, the full version is over 21Mb.
Reaction to WinEcon 4.0 has been very good: Blackwell had dealt with more than 900 sales enquiries by the end of March. WinEcon is being used in the UK and world-wide, with more sites being added daily.
Planned developments for the rest of 1996 include the release of a maintenance version of WinEcon (4.0a) to registered users, and an enhanced version (4.1) which will include the course management tool. This release will incorporate a whole host of new features and improvements to the core of the WinEcon system. Both the courseware and the assessment database have undergone another round of reviews and fine tuning.
Also planned for later this year are: a Student Edition of WinEcon, the release of the WinEcon Authoring Tools, a series of WinEcon training workshops, and the publication by Blackwell of Interactive Economics, the WinEcon Workbook. Looking further ahead, a US version of WinEcon is planned for Spring 1997, and discussions are underway concerning a number of other overseas versions for Australian, Malaysian, Canadian and French users.
Li Lin started by saying that there are three standard groups of people who can run WinEcon: students, lecturers and developers. The developers obviously have complete access to all parts of the WinEcon package, but lecturers would have available a separate Lecturer's Interface which provides them with a Course Management system to customise the program for use at their institution and to help them undertake course administration tasks. This would be the main focus of the demo, but for those who might not have seen WinEcon before she first gave a quick look at the WinEcon Student Shell.
Loading up the program Li Lin showed us how students would first select a course from the list of predefined courses on offer (either one of the default courses provided in the package or a specially customised one created by a lecturer using the Lecturer's Interface). Each course would consist of a number of chapters made up of sections and topics. To illustrate the character of the tutorial material she skimmed through section 4 of chapter 2, which covers the cobweb model. We saw the use of the blackboard metaphor on the introductory page and the way in which a student would navigate through the material. The use of graphics and the interactive nature of the material were clear to see. She showed us how the student could make use of the other tools such as the glossary and references, or use the copy button to copy text material to disk (via the Windows Notepad). Students could also customise some features of WinEcon themselves - for example they could choose a different tutor (professor) image from the default Phil Hobbs look-alike. There is a message facility giving access to a bulletin board, so that students can exchange comments or ideas, and an application launcher. Clearly some of these features would be disabled by a lecturer when the software was being used to give examinations to students.
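To give a flavour of the tutorial material, the cobweb model that Li Lin skimmed through can be sketched in a few lines of Python; the demand and supply parameters below are illustrative assumptions, not figures taken from WinEcon:

```python
# Cobweb model: suppliers set output using last period's price.
# Demand:  Qd = a - b*P      Supply:  Qs = c + d*P(-1)
# Market clearing Qd = Qs gives  P(t) = (a - c - d*P(t-1)) / b.
# All parameter values here are purely illustrative.

def cobweb_path(a=100.0, b=2.0, c=10.0, d=1.0, p0=50.0, periods=10):
    """Return the sequence of market-clearing prices."""
    prices = [p0]
    for _ in range(periods):
        prices.append((a - c - d * prices[-1]) / b)
    return prices

path = cobweb_path()
# With d/b < 1 the oscillations are damped and price converges
# towards the equilibrium P* = (a - c) / (b + d) = 30 here.
```

When d/b exceeds one the same recursion explodes, which is exactly the stable/unstable distinction that the tutorial screens let students explore interactively.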
Turning to the Lecturer's Interface (WinEcon Lecturer) Li Lin took us through each of the main menu choices: Administration, Assessment, Resources, Course Design and Configuration.
Administration enables you to define users and classes and to set options for usage logging (results will be exported to a text file). You can assign courses to a particular class and view test or exam results.
Assessment allows you to select questions for a test or an exam and to define rules and program behaviour to go with them. For example, the defaults for the exam questions include the "Reveal if correct or wrong" option, which you can cancel if you wish. You can also edit or delete questions, or even add new ones using the template provided.
Resources allows you to modify glossary entries, references and publication details. So you could include references to your own work or your favourite textbook or articles in the Economist, even including the university library's reference code if you really want to help the students. You can also add in extra applications which can be launched from within WinEcon - perhaps for example Netscape so that you can go straight to the WinEcon web site or CHEER on-line!
Course design is used to put together customised courses. You provide a Course Name and select chapters, topics and sections, self tests and exam tests to make up the course.
Configuration is used to set directories and specify files to use for tutorial pages.
Li Lin quickly put together a temporary course called Welsh Economics consisting of only three topics, altering some of the settings, and then flipped back to WinEcon itself to show how it affected the student shell.
For the rest of the time we moved into a computer lab where WinEcon had been set up for a hands-on session. A special tutorial sheet had been prepared for us to work through, again focussing on the use of WinEcon Lecturer. New users were advised first to click on the Using WinEcon tab in the Student Shell. Then they could join the rest of us in working through various activities designed to show how the Lecturer's Interface can be used to set up a new course, add new users or classes, add new glossary and reference entries etc. The exercise was well-designed and most people present managed to work through it in the time available.
Note: WinEcon is being distributed and marketed by Blackwell Publishers and any enquiries should be directed to them at:
108 Cowley Road,
Oxford OX4 1JF;
Tel: +44 (0)1865 791100;
Fax +44(0)1865 205152;
In the first part of the "Software Sandwich" Richard Pierse of the University of Surrey talked about and demonstrated a prototype version of his software for model solution and simulation.
The software is designed to handle sets of equations, including non-linear equations. It has been written mainly for use by the macromodelling teams, but is easy to use and could be used as a teaching tool.
Richard talked about the history of macromodelling and the tools which modellers had used to assist them in their work. At the stage when models contained only adaptive expectations it was easy to use standard software such as TSP, but with the advent of forward looking expectations modellers had either to use one of a limited number of commercially available programs such as TROLL and AREMOS, or write their own programs to deal with such systems. He mentioned NIMODEL and AMODEL, both examples of the latter type of program produced in the UK in the late seventies, used respectively at the National Institute and HM Treasury. Both are now rather long in the tooth and were designed for professional forecasters. They are not very user friendly and require some knowledge of Fortran.
Pierse felt that an up-to-date and easy-to-use program was now needed. A broken ankle (sustained when walking on Dartmoor while in Exeter for the RES conference in 1994) had meant that he had found himself working from home with few distractions and this had given him time to develop his ideas. Now he had received funding from the ESRC to undertake the task and he was about six months into the three year project.
The program should:
- allow equations to be entered in simple algebraic form;
- offer the flexibility to add new equations or edit existing ones within the program;
- be able to read in data files from PcGive, Microfit etc.;
- run under Windows, with all the advantages of its object linking protocol and Help system.
Using the example of a DHSY consumption function Pierse showed how WinSolve's model description language allowed you to write an equation in a form much closer to a natural algebraic format than was usually possible.
We then had a brief demo to show how the program worked in practice. Pierse took for his first example a stable model of forward looking price expectations based on Fisher (1992) Chapter 4, showing how to enter the equations and set terminal conditions. In fact he showed that the model could be set up and read in from any text editor - here he just used Notepad. He then looked at a couple of more complicated models including an unstable model and one with two unit roots.
Pierse showed how the program allows you to switch between alternative equations so that you can vary just part of a model at a time. Reading in any necessary data was very easy and the data set is shown in a summary information box. Different terminal conditions can be selected, such as a fixed value, a constant rate of growth etc. Model solution offers both static and dynamic options and it is possible to monitor the solution process or not, as you wish.
He showed the solution for a stable model and then substituted in an equation with a unit root. Now no solution could be found within the default 100 iterations. The moral of the story is - first make sure you have a stable model.
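The convergence failure Pierse demonstrated is easy to reproduce in miniature. The sketch below is a toy illustration of the general point in Python, not WinSolve's actual solution algorithm: iterate a first-order equation until successive values settle, and note that with a unit root plus drift they never do.

```python
# Iterate y = rho*y(-1) + drift until successive values agree to
# within tol. A toy illustration of iterative model solution -
# not WinSolve's algorithm.

def iterate_model(rho, drift=0.1, y0=1.0, max_iter=100, tol=1e-6):
    """Return (iterations, value) on convergence, (None, value) otherwise."""
    y = y0
    for i in range(1, max_iter + 1):
        y_new = rho * y + drift
        if abs(y_new - y) < tol:
            return i, y_new          # converged
        y = y_new
    return None, y                   # no convergence within max_iter

steps, value = iterate_model(rho=0.5)   # stable: settles at drift/(1-rho)
steps_ur, _ = iterate_model(rho=1.0)    # unit root: the change never dies away
```

With rho = 0.5 the iteration settles in a couple of dozen steps; with rho = 1.0 each pass moves the solution by the same amount, so no iteration limit will ever be enough - the moral of Pierse's demonstration.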
Finally, he showed how the software coped with a much bigger model, namely the 1,000-plus equation National Institute model that he was working on with Andy Blake. The model has a maximum lag of 8 periods and leads of 4 periods, and in the WinSolve implementation a number of alternative equations were included which could be exchanged for the defaults.
The middle of the software sandwich was provided by Jan Podivinsky who talked about SNP: A Program for (Semi)Non-Parametric Time Series Analysis, written by A. Ronald Gallant of the University of North Carolina and George Tauchen of Duke University. Jan explained that he had come across the program after reading a paper by Tauchen on the efficient method of moments. It had been around for some time and was readily available (although not so readily usable) and he thought that it should be more widely known.
Podivinsky explained that the program was motivated by the need to estimate the (one period ahead) conditional density of a stationary multivariate time series process. It is set up as a complete package, incorporating extra features for prediction, plotting and simulation analysis, and it uses a particular form of polynomial series expansion.
The program is a public domain package, available by anonymous ftp from ftp.econ.duke.edu in the directory pub/arg/snp where you can find the program Fortran code and Postscript User Guide. There is also a PC executable version (i.e. you don't see the Fortran code). All are free, without warranty, for research purposes.
Podivinsky said that there was considerable accumulated experience in the use of the package, which is now in version 8, and it is well documented. It would also repay potential users to read the many published papers which have applications based on the package - he mentioned particularly one by Gallant and Tauchen on asset prices in the 1989 volume of Econometrica and Hussey's 1992 Journal of Econometrics paper on asymmetry in business cycles.
In trying to answer the question "What does the Program Do?" Podivinsky said it was difficult to explain on one side of A4. The program uses a Hermite polynomial expansion which allows a complex generalised form of multivariate equation, with potentially a large number of parameters, to be estimated, and it can capture a wide range of behaviour in multivariate processes. It relies on three fine-tuning parameters: Lu (the number of lags in the location shift mu(x)), Lr (the number of lags in the scale shift R(x)) and Lp (the number of lags in the x part of the polynomial). Your reporter had some difficulty in following the details of this theoretical material and was relieved afterwards when Podivinsky agreed to a request to produce a detailed review for publication in the Software Review Section of the November issue of the Economic Journal.
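For readers who, like your reporter, find the formal description hard to picture, the core idea of a Hermite-type expansion can be illustrated very simply: a squared polynomial reshapes a Gaussian base density. The Python fragment below shows only the flavour of this with a single coefficient; it is an illustration, not Gallant and Tauchen's conditional parameterisation.

```python
import math

# One-coefficient illustration of a polynomial-times-Gaussian density:
#   f(z) = (1 + theta*z)**2 * phi(z) / (1 + theta**2)
# The divisor 1 + theta**2 makes the density integrate to one.
# This shows only the flavour of the SNP idea, not its full form.

def snp_density(z, theta):
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    return (1.0 + theta * z) ** 2 * phi / (1.0 + theta ** 2)

# theta = 0 recovers the standard normal; theta != 0 skews the shape.
grid = [-6.0 + 0.01 * i for i in range(1201)]
mass = sum(snp_density(z, 0.8) for z in grid) * 0.01   # approximately 1
```

Adding higher-order terms to the polynomial, and letting its coefficients depend on lagged values, is what gives the full SNP density its flexibility.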
Podivinsky did not provide a demo of the program but he did discuss issues of parameter setting in a number of different types of models, including Gaussian VAR, non-Gaussian VAR, Gaussian ARCH and non-Gaussian ARCH models. He explained that the program follows a model selection procedure based on the use of the Schwarz BIC to move along an upward expansion path, beginning with a simple model and expanding it until an adequate model is found. Misspecification testing is based on the examination of the residuals using two main tests: a "short-term" test based on the significance of a regression of the residuals (or their squares) on linear, quadratic and cubic terms in lagged values of the y variable, and a "long-term" test based on regressions of residuals or their squares on a set of annual dummy variables to check for failure to capture long-term trends.
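The upward expansion path can be sketched in a few lines. The code below is a generic illustration of BIC-guided selection, with a toy likelihood standing in for an actual SNP fit; none of the names are from the SNP package itself.

```python
import math

# Sketch of an upward expansion path guided by the Schwarz BIC:
# start with the simplest model and keep adding parameters while
# the criterion improves. A toy likelihood stands in for a real fit.

def bic(log_likelihood, n_params, n_obs):
    """Schwarz Bayesian information criterion (smaller is better)."""
    return -2.0 * log_likelihood + n_params * math.log(n_obs)

def expand_upward(fit, max_size, n_obs):
    """fit(k) returns the log-likelihood of the k-parameter model.
    Expand k while BIC keeps falling; stop at the first deterioration."""
    best_k = 1
    best_bic = bic(fit(1), 1, n_obs)
    for k in range(2, max_size + 1):
        candidate = bic(fit(k), k, n_obs)
        if candidate >= best_bic:
            break        # the extra terms no longer pay their penalty
        best_k, best_bic = k, candidate
    return best_k, best_bic

# Toy likelihood: each extra parameter adds a diminishing improvement.
toy_fit = lambda k: -500.0 + 20.0 * math.log(k + 1)
best_k, best = expand_upward(toy_fit, max_size=10, n_obs=200)
```

The log(n) penalty in the BIC is what eventually halts the expansion: once an extra parameter's contribution to the likelihood falls below half the penalty, the path stops.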
Podivinsky commented on the fact that a forthcoming Econometrica paper by Fenton and Gallant shows that SNP is as good as optimal kernel estimation. At this point unfortunately the OHP bulb went, causing him to move into "arm waving mode". Fortunately by this time he was close to the end of his presentation and he summed up by giving an overall evaluation of SNP. On the one hand its construction using Fortran gives it a rather old fashioned look and feel and it is probably something only for specialists. However, it is freely available, well documented and flexible. It is a good competitor to kernel density estimator procedures and deserves to be more widely known. Most applications so far have been in financial and related areas but there is no reason why it couldn't be used more generally.
The bottom of the sandwich was provided by Neil Blake of BSL and Lester Hunt from Portsmouth who gave a review and demonstration of AREMOS. I will keep my comments here brief as a full review by them of AREMOS has now been published in the May issue of the Economic Journal.
Lester began with a brief history of the package, emphasising that AREMOS is expensive and primarily aimed at business users. It is an all embracing package, covering everything from simple data handling to model simulation. A new beta Windows version is now available but their review and presentation was based on the DOS version of the program.
The program is command-driven, although it includes some drop-down menus which can be useful for new users. Once you get used to the program it is easiest to use the command line to issue instructions. It is interactive, so each instruction is acted upon immediately. You can also put together sets of commands as PROCEDURES (macros). Data handling in AREMOS is very flexible. You can store and work with data of different frequencies at the same time. AREMOS has available a number of different types of databank: in particular Hunt distinguished between the "work" bank, which is created automatically every time you start an AREMOS session, and the "private" banks which can be created by the user. Both store data and other AREMOS objects, such as equations, results etc.
Data manipulation is straightforward via instructions for standard mathematical transformations or the creation of dummy variables and by matrix procedures. Many of the key commands for data manipulation are described in the EJ Review, including GENERATE and UPDATE, which are useful for recalculating transformations of series that are subject to frequent updating.
Results can be viewed on screen or in hard copy form. There are some slightly different commands for the hard copy printouts (TABLES instead of PRINT for example) but you can create table frames which can be stored as templates for later use. Graphs are used in AREMOS both as part of the preliminary data analysis and when producing a forecast. Graphical images can be saved either as hard copy files or for insertion into word-processed documents.
All standard econometric techniques are available, including OLS, 2SLS, 3SLS, IV, Non-Linear Least Squares etc. with options to allow for AR or MA errors. One nice feature of the package is that it is not necessary to create log transforms of variables manually in a log-linear regression - you just specify the equation in terms of the logs, for example:
EQUATION EQ1 Log(Y)=Log(Y)[-1],Log(X1),Log(X2);
There is also a helpful NORMALISE command to transform residuals from a multiplicative (log-linear) regression back into the units of the original series.
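The same convenience is easy to mimic in other tools. As an illustration in plain Python (nothing here is AREMOS code), the fragment below specifies the regression in logs without creating logged series by hand and then, in the spirit of NORMALISE, exponentiates the fitted values back into the units of the original series:

```python
import math

# Fit log(y) = a + b*log(x) by OLS without building logged series by
# hand, then map fitted values back to the original units - the idea
# behind in-equation Log() and NORMALISE, sketched in plain Python.

def ols_loglinear(y, x):
    """Regress log(y) on a constant and log(x); return (a, b)."""
    ly = [math.log(v) for v in y]
    lx = [math.log(v) for v in x]
    n = len(ly)
    mx, my = sum(lx) / n, sum(ly) / n
    b = (sum((u - mx) * (v - my) for u, v in zip(lx, ly))
         / sum((u - mx) ** 2 for u in lx))
    return my - b * mx, b

def normalise(a, b, x):
    """Back-transform fitted log values into the original units."""
    return [math.exp(a + b * math.log(v)) for v in x]

# Exactly log-linear data, y = 2 * x**1.5, so the fit recovers it.
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.0 * v ** 1.5 for v in x]
a, b = ols_loglinear(y, x)      # b = 1.5, a = log 2
fitted = normalise(a, b, x)     # matches y up to rounding
```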
Hunt described the standard regression output and commented that although there is a full range of conventional statistics it was disappointing that AREMOS does not provide many of the modern diagnostics that we have come to expect. The program has rather fallen behind in terms of econometric techniques - for example there are no automatic unit root tests. Although these can be constructed manually they ought to be available as built-in procedures. More seriously, AREMOS does not include any facilities to estimate multivariate cointegration models by the Johansen method, making it of rather limited value for academics or business economists keen to use the most up-to-date techniques.
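Constructing such a test manually is not difficult in any package that supports regression. The Python sketch below (an illustration, not AREMOS code) builds the basic Dickey-Fuller t-ratio by regressing the first difference of a series on its lagged level; the generated series and the rough -2.9 critical value are for illustration only.

```python
import math
import random

# Dickey-Fuller test "by hand": regress dy(t) on a constant and
# y(t-1); the t-ratio on the lagged level is the DF statistic, to be
# compared against DF critical values (roughly -2.9 at the 5% level
# with a constant). Illustration only - not AREMOS code.

def df_statistic(y):
    """Return the t-ratio on y(t-1) in dy(t) = a + c*y(t-1) + e."""
    dy = [y[t] - y[t - 1] for t in range(1, len(y))]
    lag = y[:-1]
    n = len(dy)
    mlag, mdy = sum(lag) / n, sum(dy) / n
    sxx = sum((v - mlag) ** 2 for v in lag)
    coef = sum((u - mlag) * (v - mdy) for u, v in zip(lag, dy)) / sxx
    a = mdy - coef * mlag
    resid = [v - a - coef * u for u, v in zip(lag, dy)]
    s2 = sum(e * e for e in resid) / (n - 2)
    return coef / math.sqrt(s2 / sxx)

random.seed(0)
shocks = [random.gauss(0.0, 1.0) for _ in range(500)]
stationary, walk = [], []
level_s = level_w = 0.0
for e in shocks:
    level_s = 0.5 * level_s + e      # AR(1), stationary
    level_w = level_w + e            # random walk, unit root
    stationary.append(level_s)
    walk.append(level_w)

t_stat = df_statistic(stationary)    # should be strongly negative
t_walk = df_statistic(walk)          # typically far less negative
```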
In conclusion, Hunt said that the program has many good features, particularly for data handling and producing tables of results for reports, but efforts need to be made to bring the techniques in AREMOS up to date. The current DOS version also looks rather old-fashioned in terms of its user interface but this should be overcome in the new Windows version.
Lester Hunt then handed over to Neil Blake who provided an illustrative demonstration of the way AREMOS works, based on a consumption function example. He showed how you could call up a previously stored CMD (command) file and eventually put the results into predefined table templates. Illustrating Hunt's point about the use of series transforms in equations he showed that you don't need to create logged and differenced variables, you just write them down appropriately at the time of equation specification. He showed how to produce your own Dickey-Fuller tests and how to use your estimated model to produce forecasts.
Another area in which AREMOS has fallen behind came to light in this section of the presentation. There are no automatic procedures for incorporating forward looking expectations, although you can program up your own procedures for simple formulations.
In discussion members of the audience expressed the view that AREMOS in its current state is deficient in a number of ways and certainly suffers in comparison with TROLL. Confirmation was provided, it seemed, of the need for Richard Pierse to complete his work on WinSolve.
Footnote: My impression of the RES Conference generally this year was that it was extremely well-organised with a good range of interesting and well-presented papers. Since we have been talking about "software sandwiches" I will end on a culinary note. The food too was good, although few delegates took the risk of eating the traditional Welsh laver bread for breakfast.