Panelists: Dennis Brylow (Marquette University), Tony Hosking (Purdue), Lenny Pitt (UIUC), Bruce Weide (Ohio State)
Traditional introductory programming courses emphasize a step-by-step accumulation of skills in the basic building blocks of programs – variables, expressions, assignment statements, subroutines. This programming in the small emphasizes control structures over abstraction, using simple, explicitly programmed data structures such as arrays and lists. In contrast, programming in the large emphasizes factoring programs into manageable pieces – initially as subroutines, but eventually as packages and abstract data types – and ultimately means making the best use of rich predefined libraries. This is especially important today with languages like Java that come with a large standard API. Using the standard API well may matter more than programming in the small, because it results in readable, understandable programs. Thus there is a tension between teaching programming in the large as the act of creating abstractions and teaching it as the act of choosing well from predefined abstractions. This tension is as old as the hills – top-down versus bottom-up design – but it is an important one when educating non-CS majors to think computationally, especially since they may eventually spend more time specifying their application needs (for consumption by programmers) than actually coding them. Should we devote more time to teaching model-driven design than to the nuts and bolts of nitty-gritty programming?
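The contrast between the two styles can be sketched in Java, the language the panel mentions. The class name and the max-finding task below are illustrative choices, not examples from the panel: one method does explicit index bookkeeping (programming in the small), while the other selects a predefined abstraction from the standard API.

```java
import java.util.Arrays;

public class ApiContrast {
    // Programming in the small: explicit loop, index bookkeeping,
    // and a hand-rolled comparison.
    static int maxSmall(int[] values) {
        int max = values[0];
        for (int i = 1; i < values.length; i++) {
            if (values[i] > max) {
                max = values[i];
            }
        }
        return max;
    }

    // Programming in the large: choose a predefined abstraction
    // from the standard API (java.util.Arrays + streams) instead
    // of re-coding the pattern by hand.
    static int maxLarge(int[] values) {
        return Arrays.stream(values).max().getAsInt();
    }

    public static void main(String[] args) {
        int[] data = {3, 7, 2};
        System.out.println(maxSmall(data)); // prints 7
        System.out.println(maxLarge(data)); // prints 7
    }
}
```

Both methods compute the same result; the second reads as a statement of intent rather than a recipe, which is the readability argument the panel raises for teaching API fluency.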
Panelists: Ruth Chabay (Physics, NC State), Michael Coen (CS & Medicine, Wisconsin), Chris Hoffmann (CS, Purdue & Rosen Center for Advanced Computing)
The panel will discuss the increased role of visualization and data analysis in teaching science and conducting scientific research, as well as the interplay between computing and science. Questions addressed include how scientists can effectively interact with huge and complex data sets, how data analysis and data management should be taught to students, and how visualization and computing can be used to teach scientific principles.
Panelists: Chris Hoffmann (Purdue Computer Science/Rosen Center for Advanced Computing), Amar Kumar (Eli Lilly), Bob Zigon (Beckman Coulter)
Questions addressed by the panel members include: What kinds of computational thinking abilities are required of scientists in industry? What do scientists need to know about computer science? What skill set is needed and expected of future scientists? How permeable are the boundaries between the disciplines, and do they require a broad interdisciplinary basis? The panel will address the view that many scientists now use computers to solve scientific problems, and that understanding how computations are performed is becoming fundamental to scientific research.
Now that we have seen what we’re all doing, what’s next? How can we share our individual results with one another? Can we narrow our individual efforts yet gain through collective ones? This session is a roundtable discussion to gather ideas on how to continue the collaboration begun at this workshop.
Starter ideas include: creating a mailing list (“listserv”), newsgroup, or RSS feed; creating and linking web pages, blog sites, or social networking groups; establishing a presence on SourceForge or another code-sharing site; using a wiki to develop course materials, projects, problem sets, and exams; sharing classroom experiences (examples, techniques, technologies); and collecting and reviewing textbooks, reference books, and other materials (“best practices”).