Visual Programming:
The Outlook from Academia and Industry

K. N. Whitley and Alan F. Blackwell

In S. Wiedenbeck & J. Scholtz (Eds.),
Proceedings of the 7th Workshop on Empirical Studies of Programmers, pp. 180-208.

Abstract

This paper presents three surveys of beliefs about the cognitive effects of visual programming. One survey examined the visual programming literature for the opinions of academic researchers. The second and third surveys gathered the opinions of the users of programming languages: the second survey questioned professional programmers attending a trade show and the third was a world-wide survey of LabVIEW programmers conducted via the Internet. The first survey revealed speculative, optimistic views of visual programming amongst visual programming researchers. The second found that a broad sample of professional programmers were the most skeptical of the three groups. The third indicated that LabVIEW programmers are confident that the visual programming provided by LabVIEW is beneficial. Taken together, these surveys illustrate the wide range of unanswered questions about the cognitive effects of visual programming languages. In doing so, this research hopes to assist in the process of establishing a more rigorous scientific investigation of these cognitive effects.

1. Introduction

The visual programming (VP) community has produced many ideas in the past decade with the goal of ameliorating the difficulties of programming. These ideas, in turn, have raised questions about the cognitive processes required in programming and are of particular interest in the context of empirical studies. For example, many visual programming languages (VPLs) have been designed to reduce the mental effort of programming. The design decisions for the VPLs are, currently, the result of (educated) guesses about how to facilitate the programmer's cognitive processes, rather than the result of predetermined theory. The importance of VPLs has been justified not in terms of the computational or mathematical properties of the languages, but in terms of unvalidated claims that they provide cognitive benefits to programmers. Precisely because of this lack of theoretical or mathematical basis, the claims surrounding VPLs can only be tested by empirical investigation.

A range of empirical studies have compared the performance of programmers using textual and visual representations, as reviewed by Whitley (1997). Moreover, there have been recent attempts to systematize the factors that influence human performance when using any kind of notation, a prominent example being Green's cognitive dimensions framework (Green, 1989; Green & Petre, 1996). However, the need for empirical results that shed light on how visual representations affect the programming process far outweighs the current body of knowledge. The fact remains that the VP community lacks a well-founded empirical argument in support of VPLs (a situation referred to as the evidence problem throughout the remainder of this paper).

This paper addresses the evidence problem by presenting results obtained from three projects. All three were surveys of opinions about VPLs and are informative not only for their individual findings, but also because they were designed to allow direct comparison of their results. The first of these was a literature survey that found speculative and highly optimistic views of visual programming amongst academic computer scientists. The second was a questionnaire suggesting that professional programmers, as a whole, may be more skeptical about whether visual programming benefits the programming process. The third involved collecting the opinions of 227 programmers experienced in using a specific VPL (LabVIEW); this paper presents the results of a random sample of the full data set, which indicates that LabVIEW programmers are confident that the advantages of the visual programming provided by LabVIEW outweigh its disadvantages. Each of these surveys is the first to investigate opinions about visual programming within its respective subject population. Taken together, the three surveys provide the most comprehensive analysis of current opinions toward visual programming to date.

In the first survey, an examination of the VP literature revealed a range of statements made by VP academics about the ways in which they think that visual programming assists a programmer's thought processes. These statements have been grouped into categories. In the second and third projects, the surveys administered to the programmers and LabVIEW programmers collected the same types of opinions by asking them open-format questions about how visual programming affects their thought processes. Additionally, these questionnaires asked respondents to rank the relative advantages of textual and visual programming languages along several dimensions. Thus, the questionnaires provided a larger range of opinion from each respondent, and more analysis is possible on the resulting data.

These three surveys of opinions are valuable in two ways. First, the survey of the VP literature outlines the extent of the evidence problem for the VP community. Even more importantly, some of the opinions provide good directions for subsequent research. In general, people's perceptions of their cognitive processes are not necessarily accurate; thus, a person's claims as to how and why a VPL impacts the programming process cannot be accepted as fact without further empirical study. However, people's opinions do provide a starting point: the claims of the researchers are their predictions reached after contemplating the programming process; the programmers in the second survey provide the speculations of experienced programmers; and the LabVIEW programmers provide the insight of people whose opinions have been based on experience using VPLs. In brief, the collected opinions may provide hypotheses for future, targeted empirical studies.

The full description of the projects is organized as follows. Section 2 summarizes two empirical studies in the VP literature that are relevant to the Project 3 survey. Section 3 describes the motivations and methods used in each survey, while Sections 4 and 5 give more detail on the analysis method used on the open-format responses from all three surveys. Section 6 presents the results of the semantic analysis of the open-format questions, while Section 7 presents the statistical analysis of ratings collected in Projects 2 and 3; note that, whereas the semantic analysis of Project 3 was applied to a random sample of the collected data, the statistical analysis of the ratings includes all collected data. Section 8 concludes the paper with a summary of the highlights.

2. Background

As a field, visual programming encompasses a wide variety of suggested visual notations. The Project 1 and 2 surveys are general in that neither limited its focus to one VPL. Project 1 involved collecting all opinions in the VP literature about how visual representations might impact programming, and the Project 2 questionnaire described visual programming generally as the use of a language in which "the programmer does almost all programming by manipulating diagrams instead of typing text." In contrast, Project 3 limited its inquiry to LabVIEW. LabVIEW (Laboratory Virtual Instrument Engineering Workbench) is a programming environment that features a dataflow-based VPL (called G) which was designed to facilitate development of data acquisition, analysis, display and control applications. Moreover, one of LabVIEW's marketing claims is that LabVIEW is so usable that it is an effective tool not only for trained programmers, but also for certain types of end users. In particular, LabVIEW is described as usable by scientists and engineers who possess limited programming experience, yet who need software to interact with laboratory equipment. One of the few commercially-available VPLs, LabVIEW has enjoyed relatively wide success compared to other VPLs, the majority of which are research prototypes.

LabVIEW is an attractive choice for an empirical study of visual programming for several reasons. For example, the fact that LabVIEW has been commercially available for 10 years means that one can find a sizable population of people who have used it. Thus, given the desire to study an expert population, LabVIEW becomes a natural candidate. Moreover, within the fairly small number of empirical studies that have investigated visual representations in programming, two prior studies of LabVIEW pose some interesting questions.

One of these studies is an industry-based, observational study reported by Baroth and Hartsough (1995), which describes their organization's experience using two VPLs (LabVIEW and VEE) to build test and measurement systems. Of particular interest is their case study, which compared the progress of two development teams (one using LabVIEW and the other using the textual language C) who were, in parallel, each developing the same system. The goal was to discern how LabVIEW would compare to conventional, text-based programming. Both development teams received the same requirements, the same funding and the same amount of time (three months) to complete the project. At the end of the allotted time period, the C team had not achieved the original project requirements, yet the LabVIEW team had gone beyond the requirements. From this case study and from experience using LabVIEW and VEE on over 40 other projects over the course of three years, Baroth and Hartsough enthusiastically report performance benefits for both VPLs: they report that projects using LabVIEW and VEE require 4 to 10 times less time to complete than if attempted using a textual programming language. They attribute these benefits, in large part, to the visual representations of the two VPLs; with respect to the case study, they attributed the results largely to LabVIEW's visual syntax. They claim that LabVIEW's visual notation is more readable for certain classes of end users than traditional textual programming languages. Because LabVIEW's visual syntax produces programs resembling wiring diagrams, Baroth and Hartsough claim that LabVIEW is relatively easily learned by engineers and scientists who are familiar with wiring diagram notations.

In contrast, a controlled study by Green, Petre and Bellamy revealed no benefits resulting from LabVIEW's visual notations for conditional logic (Green, Petre & Bellamy, 1991; Green & Petre, 1992). Their controlled experiment compared the comprehensibility of LabVIEW's conditional representations (LabVIEW provides two ways to represent conditional logic) to two different textual notations. Green et al. tested comprehension in terms of the response time required to answer questions about code segments. Their subjects were 11 experienced programmers: five had used LabVIEW in their work for at least six months, while the other six were advanced digital electronics designers and, thus, experienced in using electronics schematics. LabVIEW elicited worse performance (i.e., longer response times) on all of the comprehension questions. In fact, the text outperformed LabVIEW for each and every subject (each subject's mean visual time was compared to his or her mean textual time).

In choosing to survey LabVIEW programmers, Project 3 aimed not only to allow comparison to the findings in Projects 1 and 2, but also to help shed light upon the "contradictory" results obtained in the Baroth and Hartsough study and the Green et al. work. In other words, what accounts for the difference between Baroth and Hartsough's positive opinion about LabVIEW's visual notation and Green et al.'s finding that, for the same class of end users, no benefits were evident for LabVIEW's visual conditional notation? There are several explanations that could account for this gulf. For example, LabVIEW's visual representation for constructs other than conditional statements may be beneficial. In other words, since Green et al. only tested the visual notation for conditional statements, the advantage reported by Baroth and Hartsough might be due to the visual notation for other types of statements. Another possibility, however, is that Baroth and Hartsough were incorrect in attributing performance benefits to LabVIEW's visual notation. It may be that LabVIEW does improve programming but that the cause of the improvement stems from language features other than the visual syntax. Project 3, therefore, asked experienced users of LabVIEW what they think about LabVIEW's visual notation, taking care to distinguish between LabVIEW's visual notation and other LabVIEW features that may account for any benefits of using the programming environment.

3. Motivations and methods

3.1. Project 1: Survey of the visual programming literature

This project examined the VP literature for claims made by computer scientists about the cognitive benefits of visual programming. VP researchers often advanced such claims in their introductions to books and research papers in order to explain the benefits of their research. A more detailed report of this survey can be found in (Blackwell, 1996). In this report, Blackwell not only reports claims made in this literature, but also evaluates the claims in terms of relevant research in cognitive psychology (when such research exists). The original sources of the claims are rarely cited within the original VP literature. Nevertheless, Blackwell found that a few of the researchers' claims do appear to have some support from research in experimental psychology and cognitive science. In presenting these bases from experimental psychology and cognitive science, Blackwell's paper serves as a complement to the assessment of empirical studies relevant to VPLs given in (Whitley, 1997).

This survey concentrated on widely-available publications, in order to make reasonable conclusions about the beliefs held by the VP community. These publications included IEEE and ACM titles such as Communications of the ACM, IEEE Computer, IEEE Software and IEEE Transactions on Software Engineering. They also included the Journal of Visual Languages and Computing, several large conferences and a number of popular textbooks and edited collections. Specific papers were selected for analysis whenever they made a statement about the nature of programming, the nature of thought or problem-solving or the nature of vision. Papers which discussed only computing concepts or graphical user interface design were not selected. Approximately 140 papers were analyzed, of which 41 made substantial statements expressing cognitive theories. Blackwell's extended report cites the 41 papers which were selected for further analysis (Blackwell, 1996). The publication dates for the selected papers ranged from 1977 to 1995, with a median year of 1988.

3.2. Project 2: Survey of professional programmers

Project 2 was undertaken to find out whether the beliefs collected in Project 1 were specific to VP researchers. This project expanded the search for cognitive claims about visual programming by targeting some of the intended users of VPLs - experienced, professional programmers - by way of a questionnaire. This study population contrasts with VP researchers in that experienced programmers who have not used a VPL are unlikely to be familiar with the VP research literature. As a result they might be expected to have independent opinions of the value of visual programming. Moreover, in contrast to Project 3 (described below), these respondents were not expected to be experienced in using VPLs; rather, it was hoped that, based on a description of visual programming, these programmers might have initial opinions based on their general programming experience. This survey objective should be remembered when assessing the Project 2 result, as the opinions are more likely to be based on incorrect conceptions of visual programming.

The survey was conducted using a written questionnaire (shown in Appendix A) that was distributed at the entrance to a trade show. Care was taken in wording the questionnaire: it was described as a study of 'graphical programming' rather than 'visual programming'. The term 'graphical programming' was used to avoid possible confusion about the definition of 'visual programming'. Even though 'visual programming' is an accepted term in the research community, wider audiences often associate 'visual programming' with products that are not classified as VPLs by VP researchers. For example, before the Internet newsgroup comp.lang.visual became moderated, the bulk of its traffic resulted from programmers making queries specific to the Microsoft Visual Basic and Visual C++ products. Visual Basic provides a visual means for the programmer to create graphical user interfaces, but because the visual aspect of the language is confined to GUI building, many VP researchers do not consider Visual Basic to be a VPL. Thus, the questionnaire emphasized that products such as Visual Basic are not graphical languages because such products require that program logic be typed in as text.

The first question on the questionnaire asked about the nature of a respondent's programming activity; this question allowed differentiation between three groups: people who program professionally; people who are not hired as programmers, per se, yet whose job involves some programming (i.e., end users); and people who program as a hobby. The second question asked how much exposure a respondent had had to visual programming. Then, in order to ensure that the point about Visual Basic had been assimilated, respondents were asked to name the VPL that they were familiar with. In the third question, respondents were asked to compare graphical and textual programming languages along five different dimensions: ease of use, power, enjoyability, readability and changeability. Finally, in the fourth question, respondents were invited to make cognitive claims of the kind made so often by the VP researchers. This was achieved by asking about how graphical programming languages might make a difference to the "brain-work" involved in programming.

The survey was carried out at the EXE Developer's Show in London, June 15-16, 1995. This show was organized by EXE, a popular British programming magazine. EXE might be considered the British equivalent of American Programmer or Dr. Dobb's Journal - it contains a mixture of opinion columns and educational articles and is widely read by professional programmers. It does not publish academic papers, but would discuss topics such as visual programming in the context of a product review. Readers of the magazine would quite likely, therefore, be aware of new trends in software, but not necessarily be familiar with detailed cognitive theories underlying computer science research. Those attending the show appeared to be typical readers of the magazine, judged by informal observation of the questions being asked at display stands and by conversations overheard in the aisles.

In all, 506 questionnaires were distributed, meaning that more than 50% of the estimated attendees at the show received questionnaires. A total of 88 were returned, which is considered a high rate of response for survey material distributed at a commercial trade show. According to their answers to Question 1, all but two of the respondents (98%) were professional programmers. Despite the instructions given in Question 2, 25% of the respondents did name Visual Basic (or a similar product) as the VPL that they were familiar with. This points out a difficulty of surveying opinions about novel programming languages. Although the questionnaire never used the word "visual" and explicitly excluded Visual Basic, respondents who were unfamiliar with visual programming tended to respond in terms of the language that they knew. In some of these cases, respondents may have extrapolated from Visual Basic to form a concept of visual programming; in others, the respondents may have confined their remarks to Visual Basic in its current, non-VPL form. Thus, as stated above, the opinions gathered in Project 2 are more likely to be based on incorrect conceptions of visual programming.

3.3. Project 3: Survey of LabVIEW programmers

Project 3 returned to the question behind Project 2: What beliefs about the cognitive effects of VPLs are held by the intended users of VPLs? Yet, in contrast to Project 2, this survey targeted a more knowledgeable subject population: programmers who have experience programming in LabVIEW. This survey was conducted using a questionnaire, which can be seen in Appendix B. The questionnaire is divided into four parts.

The first part collected information about a respondent's programming background. Question 1 asked for an email address (used to award incentive prizes supplied by National Instruments, the maker of LabVIEW). Question 2 required a similar programmer/end user categorization to that in Project 2, but added an extra choice (academic teachers of computer science). Question 3 sought a qualitative measure of respondents' programming experience, in terms of the size and number of projects completed both in LabVIEW and other languages. Question 4 asked respondents to name the programming language that they have had the most experience using.

The second part of the survey sought to elicit a respondent's opinions about LabVIEW as a whole, without emphasizing the visual aspects of LabVIEW. Because this survey was exploratory in nature, we chose to include several open-format questions in an attempt to learn what LabVIEW programmers consider to be the important aspects of LabVIEW. Three of these open-format questions appear in this section of the survey: Questions 5, 7 and 8 ask respondents for their overall opinion of LabVIEW, examples of how LabVIEW makes programming easier, and examples of how LabVIEW makes programming difficult.

In contrast, Question 6 is much more specific. As mentioned above, Baroth and Hartsough's observational study does not allow attribution of their productivity increases to any particular aspect of LabVIEW. This issue is very important for the VP community: when empirically studying a visual representation, one must take care to isolate the visual representation from all other aspects of the language under study. In the case of LabVIEW, there are definitely other product features that can confound any study attempting to assess the impact of LabVIEW's visual notation. Question 6 asked LabVIEW programmers to assess the importance of various LabVIEW features using a 6-point rating scale. The listed features of LabVIEW are as follows. The entry 'LabVIEW Toolkits/Reusable VIs' refers to the fact that National Instruments supplies LabVIEW buyers with reusable libraries of LabVIEW code; moreover, LabVIEW programmers can also find/buy more pre-made VIs (this stands for virtual instruments - LabVIEW's term for software components) from additional sources. The entry 'National Instruments Hardware' refers to the range of data acquisition and control hardware available from National Instruments. The entry 'LabVIEW's Graphical Language (G)' refers to the visual representation offered by LabVIEW. The entry 'G's Use of Dataflow Programming' refers to the fact that, in addition to using visual notation, G is based on the dataflow paradigm. The entry 'LabVIEW's Support for "Front Panel" Interfaces' refers to the fact that the LabVIEW environment divides a LabVIEW program into a block diagram (which holds the code that describes the behavior of the program) and an instrument panel interface; LabVIEW programmers can often easily create (without writing any code) the interface to their systems. Finally, the entry 'Customer Service for LabVIEW' refers to a range of National Instruments' customer support facilities.

The third part of the survey asked explicit questions about the visual aspects of LabVIEW (i.e., LabVIEW's VPL G). This section is very similar to the questionnaire used in Project 2. In Question 9, respondents are asked to compare G along several dimensions against a textual programming language of their choice. The first five of the dimensions are identical to the dimensions used in Project 2: power, ease of use, readability, changeability and enjoyability. Question 9 also adds two other dimensions: whether the language adequately supports repetitive logic (i.e., the writing of loops) and whether the language adequately supports conditional logic (i.e., the writing of if-then-else statements). Question 10 is identical to the corresponding question in the Project 2 questionnaire; it asks respondents to explain how and why the graphical nature of G affects the "brain-work" required in programming.

Finally, the fourth section of the survey consists of one question which asks for any final thoughts about LabVIEW or about the survey. This question, in conjunction with the other open-format questions, gave respondents ample opportunity to remember interactions that they have had using LabVIEW.

The questionnaire was administered electronically in two versions, a WWW version and an email version. The WWW version used a cgi-script implemented in Perl to process a respondent's survey submission. One large advantage of the WWW administration was that the cgi-script allowed on-the-fly checking of respondents' submissions. We devised our script to check each survey submission for complete answers. If the script detected missing information in Questions 2 through 6 or Question 9, it asked the respondent to supply the missing information. In this way, we successfully increased the number of complete responses that we received.
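
For illustration, the completeness check can be sketched as follows. This is a minimal reconstruction in Python (the actual script was written in Perl), and the field names are hypothetical rather than taken from the survey itself:

```python
# Illustrative sketch only: the original check was a Perl CGI script; this Python
# version shows the same completeness-checking idea. Field names are hypothetical.
REQUIRED_FIELDS = [
    "q2_user_type", "q3_experience", "q4_main_language",
    "q5_overall_opinion", "q6_feature_ratings", "q9_ratings",
]

def find_missing(submission: dict) -> list:
    """Return the required fields that are absent or left blank."""
    return [f for f in REQUIRED_FIELDS
            if not str(submission.get(f, "")).strip()]

def handle_submission(submission: dict) -> str:
    """Accept a complete submission, or ask the respondent for the missing answers."""
    missing = find_missing(submission)
    if missing:
        return "Please supply the missing answers: " + ", ".join(missing)
    return "Thank you - your survey response has been recorded."

if __name__ == "__main__":
    # An incomplete submission triggers a request for the missing fields.
    print(handle_submission({"q2_user_type": "professional"}))
```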

Data collection for this survey focused on two main sources of LabVIEW users. The main source was info-labview, a mailing list for LabVIEW programmers. As of January, 1997, info-labview had approximately 2,300 subscribers. The second main source of respondents came from a list of email addresses given in a directory that National Instruments compiled of academics who use LabVIEW either in conducting their research or in teaching their classes. Direct email was sent to all 104 email addresses listed in this directory. Additionally, National Instruments had a link to our survey from their WWW page, and we solicited respondents from the Internet newsgroup comp.lang.visual. Data collection proceeded from March 11, 1997 to April 1, 1997 (a three-week period). In this time, we received 227 complete responses.

4. Open-format responses: creation of themes

Projects 1, 2 and 3 used the same coding method to analyze the opinions of the respondents; this method applied to all of the text collected in Project 1 and all of the answers given to the open-format questions in Projects 2 and 3. This coding method requires breaking text down into its constituent opinions and, then, grouping the opinions into one of a list of semantic categories, or themes. Thus, our first step was to define a set of themes that would account for the data. To establish the themes, we read through all of the text from Projects 1, 2 and 3 and considered each new type of opinion. The result of this process was a list of 15 main themes (plus a miscellaneous category) which we, then, discussed at length, in order to clarify the coding criteria that would be applied. The remainder of this section defines each theme. Details about the subsequent coding process are given in Section 5.

Note that the themes presented in this study differ slightly from the themes used in the earlier study of the Project 1 texts (Blackwell, 1996). For the current study, the decision to redo the theme definition process was made for two reasons. First, the addition of the Project 2 and 3 data revealed a few concerns that did not fit well within the original themes. Second, compared with the Project 1 subjects, who were extremely positive in their opinions, the Project 2 and 3 subjects were more likely to make negative observations about visual programming. Thus, we established a two-dimensional coding scheme. In our resulting scheme, each opinion is placed within one of the semantic themes and also is coded as being 'positive' or 'negative' towards visual programming. The positive/negative dimension also included a 'conditional' code (for opinions that explicitly expressed a conditional outcome) and 'unclear' codes (for opinions in which a positive/negative assessment could not be made); however, these two codes were rarely needed.

4.1. Practical impact of visual programming

Theme: General impact. This theme contains simple statements describing visual languages as being easy to use, without expressing any specific benefits or reasons why this should be so. Examples of these statements include assertions that visual programming makes programming more user friendly or that visual programming makes the brain-work easier (or more difficult). Also included are opinions about whether visual programming's ease of use makes programming more accessible to different classes of non-experienced users (e.g., programming students, end users, children) who may find textual programming to be too difficult.

Theme: Learnability. This category holds all claims about whether VPLs are easily learned. In some cases, respondents explicitly stated that VPLs are easily learned or, even, that training is unnecessary. More often, respondents described visual programming as "intuitive", "natural" or "obvious", which are all ways of claiming that people recognize its principles of operation without having to learn new or difficult concepts. In some cases, respondents may have been referring to some idea of an "innate" human ability to understand a notation; as pointed out in (Blackwell, 1996), the notion that a notation can be understood without training is unlikely to be true. In contrast, some respondents may have been implicitly referring to the fact that a notation was natural to them because similar notation had been encountered in some other specific context. Thus, this theme may overlap with statements in the theme Applying real world experience (defined below).

Theme: Productivity. Professional programmers are highly concerned with their productivity - how fast they can complete a project. For the professional, this is even more important than the ease of learning a new language, because learning a language is a worthwhile investment if it results in increased productivity. Consequently, this theme contains all claims attributing changes in productivity to visual programming, ranging from simple claims that "visual programming improves development speed" to statements that mention specific programming tasks. Historically, programmers have been concerned with actual speed of coding, as was assessed in "lines of code per day" measurements used by large companies to compare individual performance. However, our surveys reflect an increasing concern with whole lifecycle costs of software, including design, test and maintenance, as many statements about productivity referred explicitly to tasks other than coding.

4.2. Comparisons of visual and textual code

Theme: Readability. Much attention was focused on how VPLs fare in representing different aspects of program structure and behavior. This theme holds the cases in which respondents stated, without explanation, that VPLs are easier (or harder) to read. It also includes the cases in which respondents elaborated about how visual representation can highlight semantic relationships between program entities or, in contrast, about how visual representation can result in confusing "spiders webs" of code. In related claims, some respondents stated that VPLs give them a fast Gestalt view of their program (described, for example, as a grasp of "the Whole Picture"). Finally, in a few cases respondents commented on the information density of visual code (i.e., the amount of space it takes to express an amount of information). For example, expressing arithmetic expressions using visual notation can be much less compact than using textual notation. As a result, the code may become less readable (which is perhaps why some respondents commented on the need for textual arithmetic notation in the theme Retention of text). The effect of information density on various aspects of a language (e.g., loops, case statements), of course, might be different.

Theme: Documentation effects. This category is an extension of Readability. It contains remarks that, because visual programs are so readable, a programmer does not have to write much additional documentation (in other words, that the visual code is self-documenting). Moreover, it includes the related claims that visual code is clear enough to provide a good communication mechanism between people (e.g., between members of a programming team).

Theme: Syntax reduction. For some, visual programming provides freedom from the minutiae of programming language syntax. For example, some VPLs require fewer keywords and punctuation marks (braces and semi-colons, in particular). Similarly, some VPLs reduce the need to declare as many variables, via the use of wiring metaphors that can reduce the need to name intermediate values. Some respondents mentioned specific instances of reduced syntax as benefits of visual programming. Others were less specific about the type of syntax reduction, but described VPLs as allowing them to skip the process of programming: they described VPLs as offering "automatic" programming and stated that the freedom from programming allowed them more time to spend on design. Attributing the cause of the claims about "automatic" programming is difficult, as such a claim could result from either thoughts about syntax reduction or about the "semantic gap" between the programmer and the computer; thus, some overlap between this theme and the Mental models theme (described below) surely exists.

Theme: Modular and object-oriented programming. Most VPLs support the modular nature of programming either by presenting modules as icons on the screen or by physically enclosing modules within a diagrammatic boundary. Some respondents regarded this as a noteworthy benefit of visual programming; they implied that visual programming somehow encourages modularization more than standard, textual programming.

Theme: Retention of text. This theme contains all explicit observations about the role that text plays in visual programming. This includes opinions about how text can be most beneficially combined with visuals in VPLs (for example, the idea that VPLs should provide a textual mechanism for expressing arithmetic expressions).

Theme: Power and Computability. This theme includes two types of opinions about the ultimate applicability of VPLs, both of which were expressed mainly by the Project 2 respondents. The first type of comments dealt with issues of language level. In particular, some respondents seemed sure that visual programming is linked with higher-levels of programming - in other words, that VPLs hide the features of the machine, and even of the operating system, by providing new levels of abstraction within the language. Second, some respondents raised a concern about scalability - the idea that gains of using visual programming may exist for small projects, but that, as project size/complexity increases, the gains of visual representations compared to textual representations would disappear.

4.3. Cognitive processes in visual programming

Theme: Human visual capabilities. This theme includes claims about how the human mind processes text and images. In these cases, respondents asserted that human perception and processing of the kind of diagrams used in VPLs is fundamentally different to the perception and processing of text. Examples include the ideas that the human brain is optimized for visual processing; that the human brain reads images in "chunks", thus allowing a higher input bandwidth than is possible with text; and that VPLs tap the underused right hemisphere of the brain. These claims might be based on assumptions about information processing (that text is limited because it must be reduced to a one-dimensional stream) or about encoding, as in a few claims that human memory for images is faster or more accurate than memory for words.

Theme: Mental models. In contrast to the Human visual capabilities theme, which is for ideas about how humans process images initially, this theme holds claims about the nature of human thought. In some cases, respondents claimed on the basis of introspection that they "think in pictures". This opens up the long-running debate extending both to ancient philosophy and to modern folk psychology about whether human thought is pictorial or propositional. Nevertheless, the assumption of visual mental models clearly influences some attitudes to visual programming.

In similar cases, respondents claimed that the visual representations in VPLs are closer to their mental concepts and, thus, that they can express their thoughts more easily, without having to "translate" into text. Finally, in a few cases, respondents expressed the belief that VPLs encouraged creativity and imagination. This belief is perhaps based on assumptions about the left-brain versus right-brain dichotomy. Alternatively, this belief might reflect the privileged status granted to the visual arts in a textually literate society.

Theme: Preference and affect. The same bias regarding the visual arts might explain why a few respondents simply stated that visual programs are nicer or more fun than textual equivalents. In an age when people would rather watch a movie than read a book (as stated in Project 1), respondents in this theme see something fundamentally attractive about working with diagrams.

4.4. Comparisons of VPLs to the physical world and to natural languages

Theme: Applying real world experience. This theme holds opinions about the role of direct manipulation and physical metaphors in visual programming. The principles of direct manipulation and the desktop metaphor were undoubtedly familiar to all of the respondents. VPLs can quite reasonably be described as just another aspect of the direct manipulation revolution in human-computer interaction - in fact, VPLs and the desktop metaphor can be traced to common roots in the work of Ivan Sutherland (1963), Alan Kay and David Canfield Smith (1977). Some respondents focused on this topic with comments that the direct manipulation in visual programming exploits a highly developed human skill. In a similar vein, other respondents noted other ways that VPLs allow programmers to apply experience gained in the physical world to the computer domain. The two most frequent examples here were observations that VPLs based on flowchart notation are familiar to programmers and that people familiar with electronic circuit diagrams will find LabVIEW easy to learn. Such opinions may explain why a given VPL might be more easily learned, a fact which returns us to the Learnability theme in a circular progression that typifies the many complex interrelationships between elements of this thematic structure.

Theme: Making the abstract concrete. This theme holds opinions about the abstractness or concreteness of visual code in which the respondent seemed to be referring to some property of the abstract nature of thought required to design and create programs. (In contrast, when respondents used the word "abstraction" to seemingly refer to the issue of language level, these cases were grouped into the theme Power and computability). The relationship between abstraction and pictures is another philosophical minefield. Some respondents claimed that programming concepts are inherently not visual, while others considered that concrete pictures help us to understand abstract principles by expressing them using physical metaphors.

Theme: Comparisons to natural language. This rather theoretical category contains opinions that VPLs are desirable by virtue of their similarity to pictographic or hieroglyphic natural languages (e.g., Chinese). In other cases, respondents described VPLs as "universal" because, due to their freedom from natural language, VPLs could be more easily understood internationally. In similar comparisons, a few respondents reasoned that VPLs may be appropriate for illiterate people and for people with language handicaps.

4.5. Miscellaneous observations

A very few statements in the survey responses could not be placed into any of the themes that are described above. In some of these cases, a respondent expressed a unique opinion that did not touch upon any of the defined themes. Additionally, there were a few more statements that defied interpretation. These were all grouped into a Miscellaneous theme.

5. Open-format responses: coding method

After defining the semantic themes, we proceeded to analyze all the text from Project 1, all responses to the open-format questions in Project 2, and all open-format responses from a random sample of 40 of the questionnaires returned in Project 3. All of this text was broken down into its constituent opinions, each of which was categorized into one of the semantic themes and was coded as to whether it was a positive or negative opinion of visual programming (as mentioned above, there were also a few cases in which the opinion was conditional or unclear).

The current analysis focuses solely on opinions about visual programming. Consequently, a slight variation in the coding procedure was applied to the text gathered in Project 3. First, note that, unlike in Project 2, the Project 3 questionnaire contains more than one open-format question. To analyze these opinions, therefore, we analyzed each survey "as a whole", such that an opinion given about visual programming was counted in the themes, regardless of whether that respondent gave that opinion in response to Question 5 as opposed to, say, Question 10. Second, Project 3 was also distinct in that its questionnaire queried respondents about all aspects of the LabVIEW language, some of which are not inherent to visual programming. Thus, for the current analysis, we coded only those statements in which it was very clear that the respondent was referring to a visual aspect of LabVIEW. Given this conservative coding policy, the findings described in Section 6 are, therefore, much more likely to reflect accurate opinions about the visual programming offered in LabVIEW. As for the remainder of the opinions expressed in Project 3, we will present these in a later report, one that will include the semantic analysis of the entire data set of 227 responses, for both the VP and non-VP opinions.

The allocation of opinions to themes was not straightforward. When forming the thematic structure, we discussed the interpretation of each theme at length, in order to clarify the coding criteria that would be applied. Despite this precaution, initial samples of the statements showed poor inter-coder reliability. Thus, both authors reviewed all coding decisions until agreement was reached in all cases.

After the coding step was completed, the final step was to calculate three measures for each theme: the number of respondents who gave a positive opinion falling within the theme; the number of respondents who gave a negative opinion falling within the theme; and the number of respondents who gave opinions that were conditional or neutral. Note that these measures count the number of distinct respondents expressing positive (or negative) opinions in that theme, not the total number of distinct opinions. If one respondent made multiple statements falling into the same theme, this contributed a count of one to the tally for that theme. Moreover, there is a possible intersection between the sets of respondents making positive, negative and conditional tallies for a given theme; some respondents expressed both positive and negative opinions within the same theme, thus contributing to both tallies.
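
To make the counting rule concrete, the following minimal sketch assumes that each coded opinion has been recorded as a respondent/theme/polarity triple (the actual coding was done by hand; the data layout and identifiers here are assumptions for illustration):

```python
# Minimal sketch of the tallying step. A respondent is counted at most once per
# theme and polarity, however many statements they made in that theme.
from collections import defaultdict

def tally(coded_opinions):
    respondents = defaultdict(set)          # (theme, polarity) -> set of respondent ids
    for respondent_id, theme, polarity in coded_opinions:
        respondents[(theme, polarity)].add(respondent_id)
    return {key: len(ids) for key, ids in respondents.items()}

coded = [
    ("r01", "Readability", "positive"),
    ("r01", "Readability", "positive"),     # duplicate statement: still counts once
    ("r01", "Readability", "negative"),     # same respondent may appear in both tallies
    ("r02", "Learnability", "positive"),
]
print(tally(coded))
# {('Readability', 'positive'): 1, ('Readability', 'negative'): 1,
#  ('Learnability', 'positive'): 1}
```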

6. Open-format responses: results and discussion

 

Figure 1 - Positive versus negative opinions expressed in each project

Figure 1 summarizes the total positive/negative tallies for all of the themes used in the semantic analysis of all three projects. In total, 131 opinions were tallied from Project 1, 162 from Project 2 and 154 from Project 3; Figure 1 expresses these as a percentage of the respondents who expressed any opinion. Unsurprisingly, the vast majority of the opinions expressed by the VP researchers in Project 1 were positive. This is understandable, given that these researchers have chosen to work in the area of visual programming. However, as argued by Blackwell (1996), their lack of empirical support illustrates the evidence problem in the VP field. In contrast to Project 1 respondents, a smaller fraction of those in Project 2 expressed enthusiasm about the advantages of VP, perhaps reflecting a general skepticism regarding unfamiliar programming approaches. Even Project 3 respondents, who were very favorable toward LabVIEW's VPL (as measured in the statistical analysis of their numerical ratings), made a higher percentage of negative observations than the VP researchers.

The remainder of this section presents the tallies for each of the 15 main themes as a fraction of the total number of opinions expressed in each group. These graphs group the themes according to the structure used in Section 4, with the exception that opinions falling into the Miscellaneous theme were not considered.

 

Figure 2 - Practical impact of visual programming

General impact. All respondent groups made observations about whether VPLs are easy to use. Those with a commitment to VPLs - the Project 1 researchers, and the Project 3 LabVIEW programmers - agreed that they were, in fact, easy to use. Project 2 respondents were divided, however. Although recognizing that ease of use was the main objective of visual programming, half of the relevant respondents stated that this goal would not be achieved - that VPLs would make no difference to users, or that they would even be more difficult to use.

Learnability. Amongst both the VP researchers and LabVIEW programmers, ease of learning was a cited benefit of VPLs. This benefit was not anticipated by Project 2 respondents; in the few cases in which they did mention the learning phase, they were more apt to state the time that would be required to learn a VPL.

The area in which LabVIEW programmers did express reservations about the ease of learning LabVIEW concerns LabVIEW's use of the dataflow paradigm. Whereas the issue of language paradigm might not, at first glance, seem to be a VP issue (for example, to design an empirical study of a VPL based on the control-flow paradigm, one can devise a comparable textual notation that supports the same language semantics), the situation becomes cloudy when considering the dataflow paradigm. This problem arises because, although a textual dataflow notation can theoretically be devised, dataflow languages are, in practice, always designed using visual representation. Several respondents in Project 3 mentioned experiencing difficulty in learning and applying dataflow principles and even linked this difficulty to their substantial prior experience using non-dataflow languages. Because of the uncertainty about how these claims pertain to the issue of visualness, we simply note the possible relation in this paper. The relevant opinions have not been included in the tallies given in Figure 2; they will be discussed more fully in the later report of the Project 3 data.

Productivity. The productivity implications of VPLs attracted a higher percentage of interest in Projects 2 and 3 than in Project 1. In the long term, productivity improvements are the only justification for any investment in new programming techniques. Therefore, interest in productivity may increase within the VP community, as more researchers attempt to measure productivity changes (as, for example, in Baroth and Hartsough's LabVIEW study (1995)).

Both Project 2 and 3 respondents agreed that VPLs can increase the amount of code that a programmer can produce, but they differed in opinion regarding the impact on the productivity of the overall project. The LabVIEW programmers reported benefits in the design, debugging and maintenance phases of software development. The Project 2 programmers believed the reverse - that the advantages of visual programming would be found only during coding and that these other phases would see no change or even decreased productivity.

 

Figure 3 - Comparisons of visual and textual code (first half)

Readability. Respondents in all three projects regularly noted that an advantage of VPLs over text was the way in which the structure of the program was clarified. The researchers in Project 1 were particularly aware of specific notational properties in their languages - the way that relationships which are implicit in text are made explicit by graphical features such as connections between nodes. Project 2 respondents discussed relationships in general, but without reference to syntactic features (unsurprisingly, as they were not as familiar with such features). The LabVIEW programmers, on the other hand, were more likely to discuss the way in which they could perceive some kind of Gestalt view of their programs - the "big picture". Finally, the LabVIEW programmers were also more likely to describe the negative aspects of LabVIEW's readability; four of them stated that it is easy to produce messy, cluttered code - a variety of "spaghetti code" that is much closer in appearance to a bowl of spaghetti than the textual programs which historically gave rise to the idiom.

Documentation effects. A few respondents in each project referred specifically to the role that visual representations can play as a documentation and communication medium. In fact, the Project 1 tally includes Baroth and Hartsough's opinion, taken from their observational study (Baroth & Hartsough, 1995), that the graphical syntax of LabVIEW facilitates communication between software developers and their customers.

Syntax reduction. One of the least controversial effects of visual programming is the way in which it obviates the need for purely syntactic elements of textual programming languages, such as the semi-colons and parentheses that cause so many syntax errors. This aspect of VPLs was not uppermost in the minds of the VP researchers, however. Rather, it was the Project 2 and 3 programmers who mentioned the promise of release from syntax, doing so twice as often as the other groups. In addition, two LabVIEW programmers observed that VPLs may bring new syntactic annoyances of their own (they cited the process of wiring their diagrams, especially in large programs).

 

Figure 4 - Comparisons of visual and textual code (second half)

Modular and object-oriented programming. A few Project 2 and 3 respondents noted that visual programming encourages modular programming. One explanation for these remarks may come from the object-oriented craze; Project 2 was conducted during the height of the object-oriented revolution in commercial programming, and some of these respondents seemed to view the topic of the questionnaire as simply yet another new kind of object-oriented programming tool. Alternatively, similar opinions may also have appeared in the VP literature since the time that the Project 1 data was collected; for example, see Burnett, Goldberg & Lewis' edited collection that links visual programming to object-oriented programming (Burnett, Goldberg & Lewis, 1995).

Retention of text. The few relevant respondents from all three Projects appeared to be comfortable with the idea that text plays an important role in a well-developed VPL. A few Project 2 respondents seemed to find it hard to believe that an arbitrary computational operation could be represented graphically; these respondents stated that there would always be some level where the visual representation would be inadequate and, thus, that the programmer would have to return to a textual language.

Power and Computability. One of the concerns of Project 2 respondents was that high-level VPLs might deny them access to the low-level facilities of the machine that are so important in PC programming. This has often been a matter of concern with new generations of programming language, and it was most uniformly seen as a disadvantage by Project 2 respondents. Neither the researchers nor the LabVIEW programmers appeared to be concerned about this issue.

Figure 5 - Cognitive processes in visual programming

Human visual capabilities. The VP researchers regularly stated that justification for their work comes from the specialized capabilities of the human visual system - that VPLs will exploit these neurophysiological capacities in a more effective way than text does. Project 2 and 3 respondents rarely broached this topic. In the few cases in which they did, they did not necessarily agree with the VP researchers. As one LabVIEW programmer stated, "the human brain is massively parallel but basically operates in a linear fashion. G is parallel, but not in the least bit linear."

Mental models. In contrast, all three groups expressed opinions either that the designs they construct in their minds are in some sense pictorial or that VPLs are closer in nature to their thoughts than are textual languages. All three groups regularly claimed that VPLs allow them to copy ideas more directly from their internal mental model. These opinions are probably influenced by the predominance of diagrammatic notations for software design, but it is noteworthy that programmers consider their internal representation to hold more in common with these diagrammatic notations than with the text that they eventually write. This introspective intuition has also been noted in a study by Petre and Blackwell (1997).

Preference and affect. Project 1 researchers stated that VPLs would be popular simply because people would enjoy using them. Several LabVIEW programmers said that they think using LabVIEW is fun, but since they did not always explicitly attribute this to the visualness of LabVIEW, only a few were counted in this theme. Some Project 2 respondents claimed that they would not enjoy using VPLs at all.

 

Figure 6 - Comparisons of VPLs to the physical world and to natural languages

Applying real world experience. One way of presenting abstractions is to express them in terms of real-world concepts that are already familiar to the programmer. This technique was not only recommended by a number of VP researchers, but was also, in fact, the approach taken in the design of LabVIEW. Several LabVIEW users considered that their previous experience in engineering or electronic circuit-building had been very important to them in learning the wiring metaphor that is central to LabVIEW.

Making the abstract concrete. One of the central justifications for visual programming that is presented by VP researchers is the way in which VPLs make abstract concepts easier to understand by presenting them in the form of concrete images. This is an interesting research question in cognitive psychology (and is currently being investigated experimentally by the second author); it was hardly mentioned by respondents in the other two surveys.

Comparisons to natural language. The Project 1 researchers asserted the more theoretical claims involving the comparison of VPLs to human languages; these arguments were very rarely made by respondents in Projects 2 and 3.

7. Numerical ratings analysis: results and discussion

This section compares the numerical ratings given in response to the questionnaires of Projects 2 and 3. The questions asking for numerical assessments of language features were designed with the objective of statistical analysis; this section presents statistical comparisons between the Project 2 and Project 3 groups, as well as between the relative differences in individual ratings given to different features by each respondent.

The Project 3 respondents broke down as follows. The vast majority described themselves as professional programmers or end users: the totals by user type were 132 professional programmers, 75 part-time/end-user programmers, 14 computer science academics and 6 'other' respondents who did not fit into any of the defined options. Figure 7 shows their reported amount of overall programming experience and LabVIEW programming experience.

 

Figure 7 - Amount of programming experience claimed by Project 3 respondents

7.1. Ratings of graphics versus text

Respondents to the Project 2 and Project 3 surveys both compared the relative advantages of visual and textual programming languages on five different scales: power, ease of writing programs, readability of programs, ease of modifying programs, and how enjoyable the language is to use. In conducting a multivariate analysis of variance (MANOVA), we found that there were clear differences between the respondents in Project 2 and those in Project 3. Most notably, the LabVIEW programmers of Project 3 were far more generous in assessing the advantages of graphical languages over text languages.

The LabVIEW programmers rated LabVIEW as superior to text in all respects, whereas Project 2 respondents expected that VPLs would actually be less powerful (F=139.2, p<.001), less readable (F=27.3, p<.001) and less enjoyable to use (F=65.1, p<.001) than the textual languages that they were familiar with. These negative opinions of the Project 2 respondents may reflect a general skepticism amongst programmers towards any new programming tool rather than an informed opinion.

This possibility is supported by an analysis of the amount of programming experience of the Project 3 respondents. In testing whether there was any correlation between the ratings that respondents gave and the amount of programming experience that they had, we found that positive opinions of LabVIEW were correlated with the amount of experience that a respondent had of LabVIEW (r=.35, p<.001), but not with their general programming experience. Conversely, positive opinions of textual programming languages were correlated with the amount of general programming experience that the respondent had, but not with the amount of LabVIEW experience (r=.34, p<.001).
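
The correlations reported here are ordinary Pearson coefficients; the sketch below shows how they could be computed with scipy, again using hypothetical file and column names ("project3.csv", labview_experience, general_experience, labview_rating, text_rating).

    # Minimal sketch; file and column names are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import pearsonr

    p3 = pd.read_csv("project3.csv")

    # Rating of LabVIEW against amount of LabVIEW experience.
    r_lv, p_lv = pearsonr(p3["labview_experience"], p3["labview_rating"])
    # Rating of textual languages against amount of general experience.
    r_txt, p_txt = pearsonr(p3["general_experience"], p3["text_rating"])

    print(f"LabVIEW rating vs LabVIEW experience: r={r_lv:.2f}, p={p_lv:.3f}")
    print(f"Text rating vs general experience:    r={r_txt:.2f}, p={p_txt:.3f}")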

These findings may present some cause for concern regarding the degree to which programmers are able to be objective about the tools that they use. Empirical studies often ask programmers to give subjective opinions of new programming techniques or environments. Our findings may reflect the fact that programmers will generally be in favor of what they know best, regardless of the benefits that might be available. Alternatively, these findings may be explained by the argument that programmers continue to use for the longest time the tools that benefit them most.

However, an effect consistent with the former interpretation was apparent in one subgroup of Project 3: those who were not familiar with any textual programming language (and therefore entered "guess", as instructed, in Question 9) when giving their ratings of LabVIEW compared to textual languages. These respondents gave significantly poorer ratings to textual languages, especially regarding the power of text relative to LabVIEW (F=32.8, p<.001) and the ways in which repetition and conditional constructs are created (F=18.6, p<.001 and F=13.9, p<.001 respectively).

For this reason, those respondents who entered "guess" for this question have been excluded from further analyses regarding the relative ratings of textual languages and LabVIEW. We also tested for any apparent bias resulting from the type of user (academic, end user or professional programmer) or from prior experience with specific textual languages. Neither of these factors had any significant effect on the ratings that were given.

7.2. Specific advantages of graphics

A factor analysis of the relative ratings found three principal components. The largest factor was simply the base rating level used by each respondent in interpreting the scales. The second factor expressed their relative assessment of the textual language versus the graphical language: all five rating scales for graphics were positively correlated with this factor, while the corresponding ratings for text were negatively correlated. The final factor indicated that subjects grouped "usability" aspects of the language separately from "computational" aspects. The three usability aspects were ease of reading, writing and modifying programs in the language. The computational aspects were power, expression of repetition and expression of conditional control (although Project 2 did not include the latter two). One rating was grouped differently in the factor analyses of Project 2 and Project 3: the scale for enjoyability of the programming language was grouped with the computational properties by Project 2 respondents, but with usability by the Project 3 respondents.
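
A factor analysis of this kind could be reproduced along the lines of the following sketch, which extracts three factors from the ten ratings (five scales for the text language and five for the graphical language); the column names are hypothetical placeholders, and the varimax rotation is one reasonable choice rather than necessarily the one used in our analysis.

    # Minimal sketch; file and column names are hypothetical placeholders.
    import pandas as pd
    from sklearn.decomposition import FactorAnalysis

    cols = ["text_power", "text_writing", "text_reading", "text_modifying",
            "text_enjoyment", "vpl_power", "vpl_writing", "vpl_reading",
            "vpl_modifying", "vpl_enjoyment"]
    ratings = pd.read_csv("ratings.csv")[cols]

    # Extract three factors and inspect which scales load together.
    fa = FactorAnalysis(n_components=3, rotation="varimax").fit(ratings)
    loadings = pd.DataFrame(fa.components_.T, index=cols,
                            columns=["factor1", "factor2", "factor3"])
    print(loadings.round(2))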

Usability and computational ratings are most strongly represented by the "ease of writing" and "power" responses, respectively. In the Project 2 sample, these ratings are consistent with the skepticism expressed in the open-format responses. Project 2 respondents accepted that VPLs might make programs easier to write, awarding a mean advantage to graphics over text of +0.76, but believed that VPLs would be significantly less powerful, with a mean disadvantage of -1.48 on a 6-point rating scale (t = 7.12, p<.001).
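
Comparisons of this kind, between two ratings given by the same respondent, can be checked with a paired t-test; a minimal sketch follows, with "writing_advantage" and "power_advantage" as hypothetical names for each respondent's graphics-minus-text differences.

    # Minimal sketch; file and column names are hypothetical placeholders.
    import pandas as pd
    from scipy.stats import ttest_rel

    p2 = pd.read_csv("project2.csv")

    # Does the same respondent rate the writing advantage of graphics
    # differently from its power advantage?
    t, p = ttest_rel(p2["writing_advantage"], p2["power_advantage"])
    print(f"t = {t:.2f}, p = {p:.4f}")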

We also found a significant difference between the ratings that Project 3 respondents gave to ease of writing (+1.57) and to power (+0.42) (t = 7.70, p<.001). The relative difference between the two ratings is, however, far greater in Project 2 than in Project 3 (F=12.3, p<.001). Unlike the general level of rating given to LabVIEW by Project 3 respondents, this distinction between usability and computational properties is unaffected by the amount of experience that respondents had with LabVIEW.

Within the general usability grouping, both Project 2 and Project 3 respondents agreed that VPLs improve the ease of writing programs more than the ease of reading them. The mean rated improvements in writing and reading are 1.55 and 1.25 respectively in Project 3, a significant difference (t = 2.90, p<.005). In Project 2, readability was thought to be reduced in VPLs by comparison to text (a difference of -0.29), while the assessment of writability improved by 0.77 (t = 3.95, p<.001).

7.3. Testing LabVIEW hypotheses

While we view this survey as largely exploratory, two questions arising from the earlier LabVIEW studies can be investigated on the basis of the results from Project 3. The experimental work by Green et al. (Green et al., 1991; Green & Petre, 1992) found no benefits resulting from LabVIEW's visual notations for conditional logic. The Project 3 questionnaire therefore asked respondents to assess two aspects of computational expression: the representation of the control structures for repetition and for conditional logic. We found that those respondents who reported more experience with general programming than with LabVIEW considered that LabVIEW was a little poorer (-0.11) than textual languages in expressing repetition, but that the difference between LabVIEW and other languages was greater in the case of conditional logic (-0.42, t = 2.70, p<.01). This finding is consistent with the Green et al. finding that the representation of conditional logic is a weak point in the LabVIEW notation.

Second, Baroth and Hartsough's observational study (Baroth & Hartsough, 1995) does not allow attribution of the productivity increases to any particular aspect of LabVIEW. We suspected that the degree to which LabVIEW fosters reuse, by providing LabVIEW customers with repositories of prefabricated software components, may explain much of this productivity improvement. Given the extent of the available LabVIEW code libraries, programming in LabVIEW may largely consist of a search phase (in which the programmer looks for a software component that closely matches his or her needs), followed by a customization phase. This situation could account for LabVIEW's success with both trained programmers and end users. An end user might be empowered by LabVIEW programming precisely because LabVIEW obviates the need for much programming. Thus, we hypothesized that, even if the LabVIEW programmers rate the visual aspects of G as advantageous, they will rate the reusability afforded by LabVIEW more highly than those visual aspects. This hypothesis was not supported by the ratings given by the respondents in Question 6. In fact, the mean rating given for the value of the virtual instrument library was 5.29, while the rating for the value of the graphical language was 5.43 (t = 2.16, p<.05).

8. Conclusions

The three projects described in this paper comprise a thorough survey of how academia and industry expect VPLs to affect programming. We have found dramatic differences between the three communities sampled. Academic researchers often have ambitious theories regarding the influence that new programming languages can exert on the mental processes of the programmer. Professional programmers, whether or not they are familiar with VPLs, tend to see the advantages of new languages in different terms: they are more concerned with the potential improvements in productivity that arise from straightforward usability issues than with theories of cognition.

The professional programmers exhibit a preference for the tools that they have had most experience of using. We recognize that this might produce significant biases when programmers are questioned about the value of their tools. These biases can even extend to significant skepticism about the advantages of new techniques, whether or not the programmer fully understands the technique being described. In the course of this project, we therefore made frequent allowance for prejudice arising from ignorance, both in the case of respondents in Project 2 who were not familiar with VPLs and respondents in Project 3 who were not familiar with textual languages.

Our findings also throw some light on earlier empirical studies of LabVIEW, showing that the experienced LabVIEW programmers did (when specifically prompted) admit to the weaknesses of their tool observed in the study by Green et al. Equally interesting is the result that the LabVIEW programmers rated the visual representation of LabVIEW as more of an advantage than its reusability facilities.

Regarding our plans for ongoing research, this paper represents the starting point for a complete analysis of the experiences of LabVIEW programmers, based on the whole of the sample population from Project 3. The current paper has concentrated on comparing LabVIEW programmers' opinions regarding visual programming to those of the researchers and other professional programmers. The next phase of the study will investigate all aspects of the LabVIEW programmers' experience with this leading tool.

9. Acknowledgements

Our gratitude goes to Doug Fisher, Thomas Green and Laura Novick for their helpful feedback on our work. In addition, we thank National Instruments, particularly Lisa Wells, for their assistance in publicizing the Project 3 survey and in providing prizes as an incentive to respondents.

Alan Blackwell's research is funded by a collaborative studentship from the Medical Research Council and Hitachi Europe Ltd. He is grateful to the Advanced Software Centre of Hitachi Europe for their support.

10. References

Baroth, E. & Hartsough, C. (1995). Visual programming in the real world. In M. Burnett, A. Goldberg & T. Lewis (Eds.), Visual Object-Oriented Programming: Concepts and Environments, pp. 21-42. Manning Publications Co., Greenwich, CT.

Burnett, M., Goldberg, A. & Lewis T. (Eds.) (1995). Visual Object-Oriented Programming: Concepts and Environments. Manning Publications Co., Greenwich, CT.

Blackwell, A.F. (1996). Metacognitive Theories of Visual Programming: What do we think we are doing? In Proc. IEEE Workshop on Visual Languages, VL'96.

Green, T.R.G. (1989). Cognitive dimensions of notations. In A. Sutcliffe & I. Macaulay (Eds.) People and Computers V. Cambridge University Press.

Green, T.R.G. & Petre, M. (1992). When visual programs are harder to read than textual programs. Proceedings of the Sixth European Conference on Cognitive Ergonomics (ECCE 6), pp. 167-180.

Green, T.R.G. & Petre, M. (1996). Usability analysis of visual programming environments: a 'cognitive dimensions' approach. Journal of Visual Languages and Computing, 7(2), 131-174.

Green, T.R.G., Petre, M. & Bellamy, R.K.E. (1991). Comprehensibility of visual and textual programs: A test of superlativism against the 'match-mismatch' conjecture. In J. Koenemann-Belliveau, T.G. Moher & S.P. Robertson (Eds.), Empirical Studies of Programmers: Fourth Workshop. Norwood, NJ: Ablex.

Petre, M. & Blackwell, A.F. (1997). A glimpse of expert programmers' mental imagery. Empirical Studies of Programmers: Seventh Workshop.

Smith, D.C. (1977). Pygmalion: A computer program to model and simulate creative thought. Boston, MA: Birkhäuser.

Sutherland, I.E. (1963). SKETCHPAD, a man-machine graphical communication system. In Proc. Spring Joint Computer Conference, pp. 329-346.

Whitley, K.N. (1997). Visual programming languages and the empirical evidence for and against. Journal of Visual Languages and Computing, 8(1), 109-142.

 

Appendix A - Project 2 Questionnaire

Graphical Programming - a 3-minute questionnaire

This survey is part of a non-commercial research project on the psychology of programming, aiming to help design future generations of programming tools. We would be very grateful if you could spend a few minutes to tell us what you think about graphical programming languages and whether they could help in your work. ALL RESPONSES WILL BE CONFIDENTIAL.

 

1: How much experience of programming do you have? Circle A, B or C as appropriate

A) I am (or have been) a professional programmer
B) Programming is (or has been) just a part of my job
C) I do (or have done) programming at home, but not at work
 

2: Do you have any experience of a graphical programming language, where the programmer does almost all programming by manipulating diagrams instead of typing text? Please note that this definition does not include tools like Visual Basic, where the program logic is created using a text language.

A) I have used a graphical programming language
B) I have seen a graphical programming language
C) I have read about a graphical programming language
D) I have no experience of graphical programming languages
(If A, B or C:) Which one(s)? ________________________________________
 

3: We would like you to choose a standard text language that you frequently use and tell us how you think programming in a graphical language would compare to it. If you want to make comparisons to a real graphical language that you know, please tell us the name, otherwise write 'Guess'.

Write the name of the text language you choose: _______________________________________
Write the name of the graphical language you choose (or 'guess'): _______________________

Now please circle the numbers that you think best describe your beliefs or experience. We have left room for you to enter some other rating points if you wish to.

 

                                My text language is:      |      I think graphical programming is/will be:

hard to write    1 2 3 4 5 6   easy to write              |   hard to write    1 2 3 4 5 6   easy to write
weak             1 2 3 4 5 6   powerful                   |   weak             1 2 3 4 5 6   powerful
irritating       1 2 3 4 5 6   enjoyable                  |   irritating       1 2 3 4 5 6   enjoyable
unreadable       1 2 3 4 5 6   readable                   |   unreadable       1 2 3 4 5 6   readable
hard to change   1 2 3 4 5 6   easy to change             |   hard to change   1 2 3 4 5 6   easy to change
_____________    1 2 3 4 5 6   _____________              |   _____________    1 2 3 4 5 6   _____________
_____________    1 2 3 4 5 6   _____________              |   _____________    1 2 3 4 5 6   _____________
_____________    1 2 3 4 5 6   _____________              |   _____________    1 2 3 4 5 6   _____________

4: Please explain in a few sentences how you think a graphical programming language might make a difference to the "brain-work" involved in programming. Continue on the back if you wish to.

Thank you for your help! To save you having to post this back to us, you can just drop it in the box marked "Graphical Programming Survey" at the EXE magazine stand in the exhibition hall. Alternatively, send it to either of the addresses below, from which you can also learn more about this research, or volunteer to take part in future studies.

Appendix B - Project 3 questionnaire

Here is the Web-based survey for Project 3.

 

