The paper on IT and economics was given by Holly Sutherland, Director of the Microsimulation Unit in the Department of Applied Economics at Cambridge. After some general reflections on the differences that IT has made to the practice of applied research in economics, she focused mainly on the use of microdata in policy simulation models.
Sutherland emphasised how far we have come by quoting Durbin's comments, in his Econometric Theory interview with Phillips, about how "computing" was carried out in the DAE in the 1950s. There was a room of eight to ten women operating desk calculators, supervised by "an older lady of forbidding manner". (As I said in my comments, we have gone from dragons to software wizards!)
Drawing upon examples based on the use of POLIMOD (the model developed by the Microsimulation Unit at Cambridge), Sutherland showed how the detailed effects of different tax/benefit proposals could be scrutinised, looking at distributional issues as well as the effects on overall aggregates. She noted that because laptop computers are now widely available, small and portable, with user-friendly interfaces, computer simulation exercises can be fully integrated into the debate rather than turned to only for number-crunching exercises running out of sight in the back room.
Sutherland noted that the new technology was throwing up new challenges as well as new opportunities. For example, as her experience with POLIMOD highlights, it is often hard to distinguish a model's database from the raw datasets upon which it builds: in combining data with other information one redefines it and adds value to it. She referred to a trio of "C" problems for data users: "copyright", "confidentiality" and "commodification". Statistical agencies had to reconcile the need to generate revenue to pay for their activities with the needs of researchers who wished to gain access to information. A pessimistic view is that, if restrictive pricing policies were adopted, only privileged "insiders" would be able to gain access to the data.
As discussant for the paper I echoed many of Sutherland's points, adding some comments about the role of IT in macroeconomics and econometrics. The faster, more powerful computers that became available after the development of the microchip, together with the software written to go with them, meant that researchers could more easily work with models with complex functional forms, lag structures, expectations-generation mechanisms and so on. Monte Carlo studies and bootstrapping procedures requiring many thousands of replications have become valuable means of investigating the properties of models, estimators and tests. Computable General Equilibrium models, as well as the older input-output models, are now established forms of investigation in economics. (Quoting Augusztinovics from the special issue of Structural Change and Economic Dynamics published last year to celebrate Leontief's 90th birthday, I noted that as recently as 1960 it took 16 hours to invert a 12x12 matrix on the computer available to the Hungarian Academy of Sciences.)
I suggested too that econometric methodology has been affected by IT developments. The general-to-specific methodology, advocated by David Hendry and based on the early ideas of Denis Sargan, requires extensive estimation, testing and retesting. Although it might have been possible for a dedicated and patient researcher to follow the Hendry strategy on old, slow, cumbersome computers with unfriendly input-output devices, it is doubtful whether the approach could have gained widespread acceptance under those conditions.
Many of the papers in other subject areas highlighted two important ways in which developments in IT have changed the nature of the work undertaken. First, IT has made the storage and processing of large-scale micro databases common in a number of subjects, and has shown how data brought together from a number of sources can illuminate issues. Bob Morris, from the Department of Economic and Social History in Edinburgh, described how information from Leeds in 1832 on individual voting habits, income and expenditure, drawn from different sources, could be put together in a spreadsheet and cross-tabulated to identify common patterns and interesting discrepancies worth following up. Second, visualisation based on computer graphics can reveal patterns in the data, particularly those based on finely defined categories. Wilson showed how health information from West Yorkshire told a rather different story when plotted on a fine scale, revealing patterns that were not visible in a more highly aggregated set of indicators.
Other papers, in subjects as diverse as anthropology, the history of art, law and psychology, were equally interesting and stimulating. The proceedings are to be published in book form next year and will be well worth reading.