An interesting article on exception handling and finalization in Java.
Frequently, exceptions are stubbed out and ignored, because the writer of the code did not know how to handle the error (and was going to go back and fix it, one day, but the project manager was breathing down his neck and the release had to go out that afternoon). This is bad, since you then do not know that something has gone awry. On the other hand, if the exception bubbles up the call stack, it may kill the thread, and you may never know that there was an exception.
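The middle ground between the two failure modes — swallowing the exception and letting it kill the thread — is to record the failure before recovering. A minimal sketch (in Python rather than Java; the function names are invented for illustration):

```python
import logging

logging.basicConfig(level=logging.WARNING)
log = logging.getLogger(__name__)

def parse_port_bad(text):
    # Anti-pattern: the exception is stubbed out and ignored, so the
    # caller never learns that something went awry.
    try:
        return int(text)
    except ValueError:
        return None

def parse_port(text, default=8080):
    # Middle ground: record the failure before recovering, instead of
    # swallowing it silently or letting it kill the thread.
    try:
        return int(text)
    except ValueError:
        log.warning("invalid port %r, falling back to %d", text, default)
        return default
```

The point is not the fallback value but the log line: the error stays visible even though execution continues.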
Looks like an interesting resource, especially if you use Java.
Posted to Software-Eng by Dan Shappir on 5/30/04; 2:15:07 AM
Discuss (4 responses)
Should pointcuts be first class? Read section 3.2 and decide for yourself!
The paper can serve as an AOP tutorial for Scheme-heads, so if you never understood what all the fuss is about, you might want to give it a chance.
The paper provides an implementation based on PLT Scheme. Section 5 explains the basics of PLT's module system and the use of continuation marks.
Posted to Software-Eng by Ehud Lamm on 5/24/04; 3:05:57 AM
Discuss (1 response)
I really hate Software - Practice and Experience since they don't allow authors to provide their papers online (or so it seems). Still, I think this paper may interest several LtU readers, so I'll make an exception and mention it here even though it is not publicly available.
The authors show how to decouple the search and balancing strategies so they can be expressed independently of each other, communicating only by basic operations such as rotations.
The paper presents a C++ framework that makes heavy use of templates, and is quite efficient.
Aside from being an interesting exercise in library design, and a nice example of template use, the paper employs a few interesting programming techniques related to C++ templates that are worth knowing about.
An appendix containing the source code is available online.
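A minimal sketch of the decoupling idea, in Python rather than the paper's C++ templates: the search code never rebalances itself; it hands each node back to a pluggable strategy that communicates only through rotations. The crude one-rotation strategy here is invented for illustration and is not a full AVL scheme:

```python
class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def rotate_right(n):
    # The basic operations through which the two strategies communicate.
    l = n.left
    n.left, l.right = l.right, n
    return l

def rotate_left(n):
    r = n.right
    n.right, r.left = r.left, n
    return r

def insert(node, key, balance):
    # Search strategy: plain BST descent, knowing nothing of balancing.
    if node is None:
        return Node(key)
    if key < node.key:
        node.left = insert(node.left, key, balance)
    elif key > node.key:
        node.right = insert(node.right, key, balance)
    return balance(node)    # balancing strategy applied on the way up

def no_balance(n):
    return n

def height(n):
    return 0 if n is None else 1 + max(height(n.left), height(n.right))

def crude_balance(n):
    # Toy strategy: one rotation when a side gets too deep (a real AVL
    # strategy would also need double rotations).
    if height(n.left) - height(n.right) > 1:
        return rotate_right(n)
    if height(n.right) - height(n.left) > 1:
        return rotate_left(n)
    return n

def inorder(n):
    return [] if n is None else inorder(n.left) + [n.key] + inorder(n.right)

plain = balanced = None
for k in range(1, 8):
    plain = insert(plain, k, no_balance)
    balanced = insert(balanced, k, crude_balance)
```

Swapping `crude_balance` for `no_balance` changes the shape of the tree but not a line of the search code, which is the whole point.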
Posted to Software-Eng by Ehud Lamm on 5/19/04; 8:23:53 AM
Discuss (3 responses)
Do you agree? If so, what are the implications for programming languages?
pbook is a program for playing with this idea. The original source code is here.
Posted to Software-Eng by Luke Gorrie on 5/16/04; 7:01:06 PM
Discuss (5 responses)
Building distributed applications involves many different concerns, for example application functionality, distribution structure, tolerance for partial failure, and security. Instead of trying to build a generic tool that could handle all these concerns, we will treat each concern in depth (solve it completely) before going to the next. We consider each aspect as a partial specification, and the full specification as the conjunction of all these partial specifications. The crucial property is independence: each new partial specification that is added should not invalidate the previous ones. We have been following this approach for almost a decade now in our work on the Oz language and the Mozart system. In this talk I will give some examples of how it works for real applications and then conclude with some lessons we have learned.
A first lesson is that language design is an important part of the solution:
keeping independence imposes strong conditions on the language.
A second lesson is that to make each aspect work in realistic conditions,
we have to change the application architecture. Simple source
transformations are not enough. In hindsight this is obvious, since we
have no automatic technique for going from specification to program.
A final lesson is regarding the limits of aspect-oriented programming.
Even with the best possible AOP, complexity increases because of
the large number of aspects, each of which requires changes in the
application architecture. At some point we need to go beyond AOP.
We propose a simple and time-honored solution: to find "good enough"
solutions for certain aspects and then to hide them inside abstraction
layers. We finish the talk by summarizing what we have realized in Oz
and Mozart and outlining the large amount of work that still has to be done.
Posted to Software-Eng by Peter Van Roy on 5/15/04; 2:10:56 PM
Discuss (4 responses)
From a programming languages standpoint, I think it interesting that the standards try to define parts of the languages as unacceptable, versus trying to get the compilers to enforce the safety standards:
The standard bans any features of programming languages that are incompletely specified or unspecified. For languages such as C or C++ (which do have unspecified or incompletely specified features) developers must use an acceptable subset of the language such as MISRA C or NRC SafeC. Furthermore, safe programming practices, such as avoiding pointers and global variables, must be included in the coding standard.
Further references can be found at C/C++ Recommendations for IEC 61508 and the MISRA C Guidelines.
(And just how do you avoid pointers in C/C++?)
Posted to Software-Eng by Chris Rathman on 5/3/04; 3:35:49 PM
Discuss (2 responses)
The project charter is [to] first identify and describe the open source cross community components needed to create an alternative to Microsoft's XP Stack model for collaborative computing. The second phase is to enlist liaisons to each of the cross platform communities involved, and coordinate the creation of an easy to implement collaborative solution.
We believe that open source communities are currently providing everything the [Windows] user base needs for next generation collaborative computing, without having to undergo the costly migration [i.e. upgrade] to the XP Stack. Open Stack will also be a bridge enabling [graceful migration] to open source friendly platforms such as Linux, Solaris and OS X.
There is a big picture and lots about XML.
Though still a draft, this short tutorial (20 pp.) provides a very nice introduction to CCSL (the Coalgebraic Class Specification Language).
The tutorial is centered around a stack specification in CCSL, which is later compiled to PVS and analyzed. A stack implementation is provided (i.e., a model), and the consistency of the specification is proved.
Posted to Software-Eng by Ehud Lamm on 4/2/04; 5:30:01 AM
Discuss (1 response)
Everyone seems to be getting their licks in on the subject of whether strongly typed languages are a good thing or not. So I thought that I would take my turn, and see if maybe this entry could cause a little less jerking of the knee and a little more firing of the neurons.
Terminology problems, pseudo-scientific analogies, ignorant speculations rather than actual observation, etc. Just what you'd expect from an industry leader writing about "types".
Posted to Software-Eng by Patrick Logan on 2/28/04; 10:53:43 AM
Discuss (12 responses)
hOp is a micro-kernel based on the RTS of GHC. It is meant to enable people to experiment with writing device drivers in Haskell.
IA-32 (32-bit Intel x86) only
via haskell cafe
Posted to Software-Eng by andrew cooke on 2/19/04; 2:49:47 PM
Discuss (1 response)
Jon Bentley in Bumper-Sticker Computer Science quotes Dick Sites as saying I'd rather write programs to write programs than write programs.
For some reason I remembered this quote as saying I'd rather write programs to write programs to write programs than write programs to write programs.
But maybe that's just me...
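For the record, a toy Python rendition of the longer version — a program that writes a program that writes a program:

```python
def make_maker(n):
    # Source for a function that itself generates a function returning n:
    # a program that writes a program that writes a program.
    src = (
        "def maker():\n"
        "    inner_src = 'def inner():\\n    return %d'\n"
        "    ns = {}\n"
        "    exec(inner_src, ns)\n"
        "    return ns['inner']\n"
    ) % n
    outer = {}
    exec(src, outer)
    return outer["maker"]
```

Calling `make_maker(42)()()` threads through all three levels and yields 42.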
Posted to Software-Eng by Ehud Lamm on 2/11/04; 9:22:00 AM
Discuss (5 responses)
One of the things that interested me was the token-rename patch. It's a patch that says something like 'change all the names "Foo" to "Bar" in this section of code', so the patch includes semantic information, not just a textual diff.
I've been using automatic refactoring with IntelliJ IDEA for a while now, so the first thing that struck me was that revision control systems of the future might in fact deal more with refactorings than with diffs. I can imagine a patch like "Perform Extract Method on method Foo from line 12 to 30, and rename the new 'bar' parameter to 'baz'." Food for thought.
Theory of patches The development of a simplified theory of patches is what originally motivated me to create darcs. This patch formalism means that darcs patches have a set of properties, which make possible manipulations that couldn't be done in other revision control systems. First, every patch is invertible. Secondly, sequential patches (i.e. patches that are created in sequence, one after the other) can be reordered, although this reordering can fail, which means the second patch is dependent on the first. Thirdly, patches which are in parallel (i.e. both patches were created by modifying identical trees) can be merged, and the result of a set of merges is independent of the order in which the merges are performed. This last property is critical to darcs' philosophy, as it means that a particular version of a source tree is fully defined by the list of patches that are in it. i.e. there is no issue regarding the order in which merges are performed. For a more thorough discussion of darcs' theory of patches, see Appendix A.
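The first two properties can be conveyed with a drastically simplified model — patches over a key/value store rather than darcs' text patches. This sketch is mine, not darcs' formalism:

```python
def apply_patch(state, patch):
    # A patch records both old and new values for a key, which is
    # exactly what makes it invertible (the first property).
    key, old, new = patch
    assert state.get(key) == old, "patch does not apply here"
    result = dict(state)
    result[key] = new
    return result

def invert(patch):
    key, old, new = patch
    return (key, new, old)

def commute(p, q):
    # Sequential patches on distinct keys can be reordered; on the
    # same key the reordering fails, i.e. q depends on p.
    if p[0] == q[0]:
        return None
    return (q, p)

s0 = {"a": 1, "b": 2}
p, q = ("a", 1, 10), ("b", 2, 20)
s2 = apply_patch(apply_patch(s0, p), q)
undone = apply_patch(apply_patch(s2, invert(q)), invert(p))
swapped = commute(p, q)
```

Applying the commuted pair `swapped` to `s0` reaches the same state `s2`, which is the germ of darcs' claim that a tree is fully defined by the set of patches in it.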
It is important to realize that different languages will impose different granularity constraints on collaborative IDEs, since the definition of "local change" is language dependent.
Will we some day have collaborative IDEs that continuously monitor contract changes, and notify the programmer when he makes unwarranted assumptions and when the changes he is making to his code are going to cause breakage elsewhere?
Posted to Software-Eng by Ehud Lamm on 2/6/04; 5:56:18 AM
Discuss (1 response)
Sina is a concurrent object-oriented programming language, and it is the first language to adopt the Composition Filters Object Model (CFOM). The CFOM is an extension to the object-oriented model and can express a number of concepts in a reusable and extensible way.
The Sina language has been developed by the TRESE project as an expressive vehicle for the Composition Filters Object Model. The TRESE project (part of the SETI Group at the Computer Science Department of the University of Twente, The Netherlands) performs activities related to research on compositional object technology.
This work offers at least two points of interest, concurrency and generalizations of design-by-contract. Good documentation is the 1995 master's thesis by Piet S. Koopmans. I am not sure exactly where the work stands today, but ASoC seems the new buzzword. Composition Filters per se have seen passing mention on LtU. There are overlaps with aspects: links, helpful slides. There has been some Smalltalk development. For the hurried perhaps this basic rundown will work. I liked the paper "Object-Oriented Composition is Tangled" from Klaus Ostermann.
Two points may be of interest to the LtU community.
Jon quotes Kingsley Idehen saying You should never find yourself locked into any database vendor, programming language vendor, operating system vendor, or business application vendor.... But programming languages shouldn't really be in this list. A programming language should have a complete and accurate public standard, eliminating the danger of vendor lock-in. This is important for many reasons (getting a language 'right' is awfully hard, a public standard goes a long way in ensuring that design errors are caught in time), among them the issues Jon discusses. Think about this a bit, and then turn your mind to VB if you will...
The second issue is more interesting technically. It is about writing database access libraries (APIs) using mainstream programming languages. This is an issue I mentioned here quite a few times.
Simply hosting (i.e., embedding) SQL as text inside programs is bad for a multitude of reasons (think compile-time checking), but it's the standard approach, and we have years of experience using it.
The crucial problem is that if you want to create a database access layer as part of the design of your project, you don't have many choices for the design of the interface between your access layer (which hides the actual SQL etc.) and the rest of your system. You can use explicit iterators or cursors, which hurt the elegance of the code, return full tables which hurts performance (and when optimized so as to produce small tables, hurts flexibility and modifiability), or implement a DSL like protocol between the access layer and its clients complicating the design and requiring dynamic SQL.
Some programming languages have facilities that can ease this sort of programming problem which is quite common. I am thinking about things like laziness (and generators) and macros. Perhaps you have more suggestions.
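For instance, laziness via generators lets the access layer hide the SQL and the cursors without returning full tables. A sketch using Python's built-in sqlite3 module (the table and names are invented):

```python
import sqlite3

def rows(conn, query, params=()):
    # The access layer hides the SQL; clients see a lazy stream of
    # rows, not an explicit cursor and not a full table.
    cur = conn.execute(query, params)
    try:
        while True:
            batch = cur.fetchmany(100)   # fetched on demand, in small chunks
            if not batch:
                return
            for row in batch:
                yield row
    finally:
        cur.close()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, age INT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [("ann", 34), ("bob", 19), ("cho", 52)])

adults = [name
          for (name, age) in rows(conn, "SELECT name, age FROM users ORDER BY name")
          if age >= 21]
```

The client code reads like ordinary iteration, while fetching stays incremental — sidestepping both the cursor-plumbing and the return-the-whole-table options described above.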
It would be interesting to hear Erik's thoughts on this matter given his unique perspective. Noel can also enlighten us, based on his experience with SchemeQL.
Posted to Software-Eng by Ehud Lamm on 1/23/04; 2:50:20 PM
Discuss (7 responses)
Some chapters have obvious LtU-appeal. Minilanguages, reminds us of the Unix (Bell Labs?) school of language design and the many DSLs that we take for granted today. Languages discusses popular alternatives for Unix programming and gives specific evaluations of the languages themselves and of their popularity and areas of application.
More subtle connections can be found throughout the text. I found the description of qualities like transparency and discoverability particularly insightful because they capture in essence what I enjoy so much about using certain programming languages. I suspect they also hint at what I miss the most in studying the formal and semantics-focused areas of programming language research.
The section on Multiprogramming also warmed my Erlang-programmer heart.
I bought this book after reading Ralph Johnson's recommendation.
Posted to Software-Eng by Luke Gorrie on 1/21/04; 9:36:33 AM
Discuss (10 responses)
Moreover, a recent post provides a nice example of the tradeoffs related to using a DSL vs. opting for a more traditional design approach.
FxCop supports writing custom rules. Custom rules can be written by writing a class in any .NET language. This class has to implement one of the interfaces FxCop provides via its SDK. After compiling the class into an assembly, FxCop can load the custom rule by pointing it to that assembly...FxCop doesn't really support editing the default rules right now.
Extensibility was an obvious design goal. Deciding to support extensions by providing hooks for user supplied code (what old timers called user exits, and others call frameworks etc.) is a standard (and often useful) design technique.
The price you pay, however, is that the user supplied code is less tightly integrated with your product, and it is harder to support things like editing, tracing etc.
An obvious solution is, of course, to supply a standard rule class, that invokes or implements an interpreter for a rules DSL. Then you can have all the advantages of a DSL, when you need them.
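A sketch of that two-level design, in Python rather than .NET, with an invented rule interface and a toy "forbid" DSL standing in for a real rules language (this is not FxCop's actual SDK):

```python
class Rule:
    # The extension hook: users supply a class implementing this
    # interface (hypothetical; not FxCop's actual SDK).
    def check(self, identifier):
        raise NotImplementedError

class NoUnderscoreRule(Rule):
    # A rule written as ordinary compiled user code.
    def check(self, identifier):
        return ["underscore in " + identifier] if "_" in identifier else []

class DslRule(Rule):
    # The "standard rule class": one hook implementation that
    # interprets a toy DSL of 'forbid <substring>' lines.
    def __init__(self, program):
        self.forbidden = [line.split(None, 1)[1]
                          for line in program.strip().splitlines()
                          if line.startswith("forbid ")]
    def check(self, identifier):
        return [f"{bad!r} in {identifier}"
                for bad in self.forbidden if bad in identifier]

rules = [NoUnderscoreRule(), DslRule("forbid tmp\nforbid Impl")]
problems = [msg for r in rules for msg in r.check("tmp_bufferImpl")]
```

The host product only ever sees the `Rule` interface; whether a given rule is hand-coded or interpreted from a DSL is invisible to it.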
All these are common tricks. I am constantly surprised that only a few programmers really master them.
Note: I am not trying to say that the FxCop guys don't know about all this. I just used their new weblog as an excuse to write about this issue...
Posted to Software-Eng by Ehud Lamm on 1/11/04; 2:44:20 PM
Discuss (1 response)
As an embedded systems developer I perk up when real-time needs are addressed (not frequently enough here). The Giotto language is claimed to be "particularly suitable for safety-critical applications with hard real-time constraints."
Strong type systems ... improve robustness and composability of software. However, type systems today ... talk only about static properties of component interfaces.... Dynamic properties, such as calling conventions that state that one method must be called only if another method has been called, are not expressed in the interface definition of a component, except informally. And most system integration problems today arise because of incompatibilities in the dynamic properties of software components....
While promising methods, such as design by contract, have been explored by the research community, our approach is to focus on those techniques that are formally manipulable, and those that embrace concurrency....
Released version 1.0 of the Ptolemy II software ... includes domains for continuous-time modeling, discrete event modeling, synchronous/periodic modeling (Giotto), finite-state machine modeling, and dataflow modeling, as well as a semantic framework for hierarchically combining domains to get mixed models, including hybrid systems, concurrent state machines, and mixed-signal models.
This time, an attempt to augment the C standard library so as to provide safer versions of some of the functions that make C such an unsafe language (buffer overflows being a case in point).
Enough to fix the core problems with the language? Hardly. A "good enough" engineering tradeoff? You be the judge.
The WG14 work item proposal may also be worth a look.
Posted to Software-Eng by Ehud Lamm on 12/28/03; 2:08:07 PM
Discuss
I wonder what the bias is (towards what is already available in the languages)? Anyway, it seems like an interesting approach - obvious, really, but I don't know of any other similar work. I guess it must happen.
Posted to Software-Eng by andrew cooke on 12/24/03; 1:55:01 PM
Discuss
I am a fan of generic programming, so I should be the first to celebrate papers of this nature. In this case, I am not sure I can.
Perhaps I can't get over the fact that Ada isn't mentioned (and it was one of the first languages to offer industrial strength generic programming facilities). Perhaps the problem is that I got used to papers with better theoretical grounding. I am not sure.
Be that as it may, the subject is quite important, and seems to grow in importance daily (hooray, hooray). I suggest you read the paper and make up your own minds.
Please share your take on the subject in the discussion group.
Posted to Software-Eng by Ehud Lamm on 12/24/03; 8:03:00 AM
Discuss (4 responses)
Patrick posted some thoughts on this matter.
What do others think? Is this a sign of language deficiency, or is this the kind of "real complexity" software engineers really need to handle?
Posted to Software-Eng by Ehud Lamm on 12/24/03; 4:02:05 AM
Discuss (1 response)
The author spends a fair fraction of the essay discussing various programming languages (and even manages to mention Ada), making this on topic for LtU.
Personally, I think reading code, and writing readable code, is much more than just a language issue. It is firstly about mastering programming techniques, and about developing a sense of style. It is about understanding micro-decisions (like naming), module design, and finally about understanding software architecture and system design.
These skills can only be developed with practice, and require years of experience. But that's not enough.
Reading code, and learning to appreciate what great code looks like, is essential.
That's why I like to conduct code reading workshops. Not all programmers have day to day access to excellent code. Not everyone is going to look for good examples on the net, and beginners are not always able to distinguish good code from bad.
You cannot become a great author without reading. Nor can you become a great programmer without reading great code.
One final thought. Read the great code you can find, even if it is not written in the language you currently use. The experience will make you a better programmer, whatever the language you use.
Posted to Software-Eng by Ehud Lamm on 12/15/03; 7:33:09 AM
Discuss (27 responses)
The fact that discussing this topic is such a favored pastime is a bit worrying, I guess. But what the heck...
Posted to Software-Eng by Ehud Lamm on 12/15/03; 7:19:54 AM
Discuss
Posted to Software-Eng by Ehud Lamm on 12/9/03; 9:19:16 PM
Discuss
An interesting exercise in language design? You decide.
The nesC team chose an evolutionary rather than a revolutionary design approach, and as a result nesC is an extension of (a subset of) C.
The nesC homepage is here.
Posted to Software-Eng by Ehud Lamm on 12/7/03; 6:13:48 AM
Discuss
Now, Meyer has a language design philosophy, does he not? Namely, there are three important things for a programming language: SW quality, SW quality and SW quality...
I am not sure this interview would be the best way to advocate this cause, but I agree with the notion that managing complexity is at the root of software engineering. That's why this question from Bill is right on the money:
It sounds to me that you're talking about two things: getting rid of unnecessary complexity and dealing with inherent complexity. I can see that tools, such as object-oriented techniques and languages, can help us deal with inherent complexity. But how can tools help us get rid of self-imposed complexity? What did you mean by "getting at the simplicity behind the complexity?"
Only it sounds to me he got it backwards: Tools can help with unnecessary complexity. Inherent complexity requires insight...
Posted to Software-Eng by Ehud Lamm on 11/3/03; 3:12:24 AM
Discuss (15 responses)
The original proposal is apparently here. The link above is to a detailed response by Kiczales, that even includes some good links.
Posted to Software-Eng by Ehud Lamm on 10/20/03; 4:49:23 AM
Discuss (12 responses)
Generic programming consists of increasing the expressiveness of programs by allowing a wider variety of kinds of parameter than is usual. The most popular instance of this scheme is the C++ Standard Template Library. Datatype-generic programming is another instance, in which the parameters take the form of datatypes. We argue that datatype-generic programming is sufficient to express essentially all the genericity found in the Standard Template Library, and to capture the abstractions motivating many design patterns. Moreover, datatype-generic programming is a precisely-defined notion with a rigorous mathematical foundation, in contrast to generic programming in general and the C++ template mechanism in particular, and thereby offers the prospect of better static checking and a greater ability to reason about generic programs.
This paper lays the background for the Datatype Generic Programming (DGP) project funded by the UK's Engineering and Physical Sciences Research Council (a call for PhD candidates was circulated a few months back).
This line of work is closely related to my areas of interest, specifically abstraction mechanisms in programming languages.
DGP is essentially an outgrowth of typeful programming of the variety found in Haskell. As such, it is likely to give rise to the same concerns people have with advanced (and static) type systems. Be that as it may, I am quite sure that moving towards more generic, while still logically sound, software artefacts is the right way to go.
Posted to Software-Eng by Ehud Lamm on 8/23/03; 2:23:22 PM
Discuss (4 responses)
We present a new technique for abstracting programs to models suitable for state space exploration (e.g., using a model checker). This abstraction technique is based on a region type system in which regions are interpreted as sets of values. A major benefit of the region abstraction is that it soundly supports aggressive state space reduction and state size reduction in the presence of aliasing without relying on either imprecise global static alias analysis or a large pointer slice of the source program prior to model construction. Region types themselves contain only locally sound alias information; however, our models are globally sound by computing dynamically with region names at model checking time. This technique effectively shifts part of the alias analysis to the model checker and leads to state space reduction as well as enhanced model precision. Region types also provide a flexible framework for adjusting the tradeoff between model precision and performance. We have used these techniques to implement a region-based model compiler for C#.
So you start with a Java-like language, with reference semantics, and then work like mad to handle aliasing. Wouldn't choosing a better designed language like Ada be a more productive choice? I think we know how people choose the languages they do research on.
Be that as it may, this is an interesting bit of work that combines techniques from type theory and static analysis with model checking. We are likely to see more and more research combining these radically different approaches. Of course, we will also eventually combine more expressive contracts (as types, perhaps) and blame analysis and produce a more comprehensive framework for achieving software reliability.
Posted to Software-Eng by Ehud Lamm on 8/7/03; 1:33:09 AM
Discuss (1 response)
Interestingly enough Jon doesn't stress two things I find to be of fundamental importance: TDD is more appropriate to some programming languages and architectures than to others, and IDE support is quite helpful for xUnit to take hold (he does mention the "green bar" though).
Jon posted a few more bits from his interviews with Ward and Marick to his weblog.
Posted to Software-Eng by Ehud Lamm on 8/4/03; 9:27:37 AM
Discuss (22 responses)
This M.S. thesis from Purdue University was announced on the TYPES forum. I don't have the time to read it at the moment but it sure sounds interesting.
Posted to Software-Eng by Ehud Lamm on 7/26/03; 4:34:55 AM
Discuss
What caught my eye in the article was the following statement:
It's not surprising that the flaw found its way into Windows Server 2003, said Russ Cooper, an analyst at Reston, Va.-based TruSecure Corp. and moderator of the popular NTBugtraq mailing list. "For all its work, Microsoft knows that solving the buffer-overflow problem is not going to happen," Cooper said. "They can reduce the number, minimize the effects for some services, but [neither] they nor anyone else can get rid of them, no matter what hype is associated with it."
Maybe he meant that solving the buffer-overflow problem within the existing Windows code-base is not possible. If he meant it in a general way, do you accept it? (Taking a PL-centric view, of course.)
Posted to Software-Eng by Dan Shappir on 7/20/03; 2:53:17 AM
Discuss (12 responses)
If, like me, you ponder the subtle relationship between programming languages and their IDEs, you should take a look at this essay.
But beware: Quite a few of your favorite languages are mercilessly criticized by Jef...
Posted to Software-Eng by Ehud Lamm on 7/12/03; 12:58:34 AM
Discuss (31 responses)
Technically speaking, Flow Caml is also one of the first real-size implementations of a programming language equipped with a type system that features simultaneously subtyping, ML polymorphism and full type inference.
Here's a very clear explanation of the basic concept.
Section 2.1 of the manual contains several simple examples that will be helpful for understanding how such a system can be used.
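Flow Caml enforces information flow statically, through types; as a rough dynamic analogue (my own sketch, not Flow Caml's machinery), one can tag values with security labels and propagate them through operations:

```python
class Tainted:
    # A value carries the set of principals that influenced it; any
    # operation on tainted data propagates the labels.
    def __init__(self, value, labels=frozenset()):
        self.value = value
        self.labels = frozenset(labels)
    def __add__(self, other):
        if not isinstance(other, Tainted):
            other = Tainted(other)
        return Tainted(self.value + other.value, self.labels | other.labels)

def send(channel_labels, x):
    # The flow policy: a value may only leave on a channel cleared for
    # every label it carries.
    if not x.labels <= frozenset(channel_labels):
        raise PermissionError("information flow violation")
    return x.value

secret = Tainted(42, {"alice"})
public = Tainted(1)
mixed = secret + public            # the result is labelled {"alice"}
```

The type-based approach catches the same violations at compile time, with no runtime wrappers — which is what makes the combination of subtyping, polymorphism, and full inference in Flow Caml notable.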
Posted to Software-Eng by Ehud Lamm on 7/2/03; 4:52:36 AM
Discuss
Every popular methodology has a graphical modelling language which presents various pictorial representations of a system. FAD's modelling language provides the typical elements of functional programming, types and functions, plus elements to support modular development such as modules, subsystems and two forms of signature which specify an interface or a behavioural requirement...
Not sure this is exactly relevant here - hope it's OK. From c.l.f (posted by Hal Daume)
Posted to Software-Eng by andrew cooke on 6/30/03; 10:59:40 AM
Discuss (2 responses)
Asynchronous Transfer of Control ("ATC") is a transfer of control within a thread, triggered not by the thread itself but rather from some external source such as another thread or an interrupt handler. ATC is useful for several purposes; e.g. expressing common idioms such as timeouts and thread termination, and reducing the latency for responses to events. However, ATC presents significant issues semantically, methodologically, and implementationally. This paper describes the approaches to ATC taken by Ada and the Real-Time Specification for Java [2, 3], and compares them with respect to safety, programming style / expressive power, and implementability / latency / efficiency.
One of the interesting papers in Ada-Europe this year. ATC essentially means that control can jump inside a thread without the thread's knowledge or preparation. This, of course, has immediate safety implications. Programming languages must provide ways for the programmer to ensure the consistency of his system.
ATC is a difficult issue in language design, and both Ada and the RTSJ have made serious attempts to provide workable solutions. They share a common philosophy in opting for safety as the most important objective, and thus in defining ATC to be deferred in certain regions that must be executed to completion. They offer roughly comparable expressive power, but they differ significantly in how the mechanism is realized and in the resulting programming style.
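True ATC interrupts a thread without its cooperation, which Python cannot express; but the shared deferral idea — abort-deferred regions that must run to completion, with the transfer honoured only at well-defined points — can be approximated with polling (a sketch under that assumption):

```python
import threading

cancel = threading.Event()

def worker(log):
    # Cancellation is honoured only at polling points; each loop body
    # plays the role of an abort-deferred region that runs to completion.
    for i in range(5):
        if cancel.is_set():          # polling point: the "ATC" fires here
            log.append("cancelled")
            return
        log.append(i)                # deferred region: never torn mid-step

before = []
worker(before)                        # no request pending: all steps run
cancel.set()                          # asynchronous request from outside
after = []
worker(after)                         # honoured at the first polling point
```

Ada and the RTSJ get this effect without explicit polling, which is precisely the implementability/latency tradeoff the paper compares.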
Posted to Software-Eng by Ehud Lamm on 6/22/03; 3:05:09 AM
Discuss (3 responses)
One of the keynotes at Ada-Europe this year was given by Mira Mezini and was concerned with Aspect Oriented Programming.
We have discussed AOP many times before, but it seems like CAESAR has some interesting features that are worth discussing (first class cross-cutting concerns, anyone?)
Posted to Software-Eng by Ehud Lamm on 6/21/03; 8:58:16 AM
Discuss (2 responses)
Grounding macros in the more general framework of multi-stage computation (e.g., MetaML) helps express their semantics, and eliminates the need to explicitly deal with things like variable capture.
Aside from presenting this approach, the paper can also serve as a nice summary of the main design dimensions involved in designing a macro system: analytic vs. generative macros; string vs. AST oriented macros; typing etc.
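The string-vs-AST dimension is easy to demonstrate with Python's ast module (a stand-in for illustration, not the paper's macro system): textual splicing is vulnerable to precedence leaks that tree splicing avoids.

```python
import ast

def gen_string(name, n):
    # String-oriented generative macro: the argument is respliced as
    # text, so "1+1" leaks precedence into the generated expression.
    return f"def {name}(): return {n} * 2"

def gen_ast(name, n):
    # AST-oriented macro: build the tree and insert the argument as a
    # constant node, which cannot be re-parsed under a different precedence.
    fn = ast.parse(f"def {name}(): return 0 * 2").body[0]
    fn.body[0].value.left = ast.Constant(n)
    tree = ast.fix_missing_locations(ast.Module(body=[fn], type_ignores=[]))
    return ast.unparse(tree)

ns = {}
exec(gen_string("f", "1+1"), ns)     # generates: return 1+1 * 2
ns2 = {}
exec(gen_ast("g", 1 + 1), ns2)       # generates: return 2 * 2
```

The string version quietly computes 1 + (1 * 2); the AST version doubles the value it was given.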
Posted to Software-Eng by Ehud Lamm on 6/10/03; 11:41:55 PM
Discuss
... we have learned something about how to do the job more precisely, by writing more precise specifications, and by showing more precisely that an implementation meets its specification. Methods for doing this are of both intellectual and practical interest. I will explain the most useful such method and illustrate it with two examples.
I can't find a transcript of the lecture but the slides contain some very useful information. Cool quote: "Any idea is better when made recursive (Randell)"
A decade earlier, Lampson wrote Hints for Computer System Design, which is an interesting roundup of some engineering essentials.
Posted to Software-Eng by Manuel Simoni on 6/8/03; 3:55:00 PM
Discuss
Aspect Oriented Programming has its roots in Open Implementations / Meta Object Protocols. A MOP lets clients override parts of an implementation to better suit their needs (e.g. memory allocation strategy for objects.) The talk proposes that instead of hiding implementation details, control over "mapping dilemmas" should be given to application programmers.
To recoup the overhead introduced by opening up the innards of the system, Kiczales proposes partial evaluation and code memoization at runtime.
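Memoization of that kind is easy to illustrate; here is a Python sketch, with an invented `price` hook standing in for an expensive meta-level operation (not Kiczales' actual system):

```python
from functools import lru_cache

calls = {"count": 0}

def price(item):
    # Stands in for an expensive, user-overridable meta-level hook
    # (an invented example).
    calls["count"] += 1
    return len(item) * 10

memo_price = lru_cache(maxsize=None)(price)

first = memo_price("book")
second = memo_price("book")    # cache hit: price() is not re-run
```

As long as the hook is pure for a given input, the open implementation costs only one real call per distinct argument.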
Kiczales' advice to framework/library implementors is "if you can't conquer, at least divide".
Listener: "Is there a restriction that says that meta code isn't allowed to change the behavior of interface code?" Kiczales: "It's really nice if you don't make that restriction, but that freaks people out." (laughter)
A related paper was previously mentioned, and there's also a newer talk on AOP. (Also: AOP discussion on LtU.)
Posted to Software-Eng by Manuel Simoni on 6/8/03; 9:34:20 AM
Discuss (2 responses)
A useful analysis of the differences between synchronous destructors (as in C++) and asynchronous finalizers (like those in Java).
The paper can also serve as a nice introduction to destructors and finalizers in general. Reading the appendix is perhaps the fastest way to appreciate the fundamental argument presented in the paper.
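Python happens to offer both flavours, which makes the contrast easy to see: context managers give the synchronous, destructor-like release at a known program point, while a `__del__` finalizer runs at the garbage collector's discretion, perhaps never. A sketch of the synchronous side (class name invented):

```python
class SyncResource:
    # Deterministic release in the spirit of a C++ destructor: cleanup
    # runs at a known program point, when the with-block exits. A
    # __del__ finalizer, by contrast, runs whenever (if ever) the
    # collector gets around to the object.
    def __init__(self, log):
        self.log = log
    def __enter__(self):
        self.log.append("open")
        return self
    def __exit__(self, *exc):
        self.log.append("closed")
        return False

events = []
with SyncResource(events):
    events.append("use")
# By this line, "closed" is guaranteed to have run.
```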
Posted to Software-Eng by Ehud Lamm on 5/21/03; 4:18:09 AM
Discuss (1 response)
Why amusing? First, because of the debate concerning what modules are all about, and second because modules aren't really a new idea, are they?
Posted to Software-Eng by Ehud Lamm on 5/20/03; 10:06:20 AM
Discuss (3 responses)
This sounds cool, but I wasn't able to find more information (or a download). More information would be welcome.
It seems like there's a trend in programming language work these days to concentrate on Excel. I think this is actually a good idea.
The obvious next step is to produce a framework for easy building of such tools. After that, someone should really work on supporting the implementation of this sort of tool in the spreadsheet itself. This shouldn't be too hard to do.
Posted to Software-Eng by Ehud Lamm on 5/4/03; 5:02:58 AM
Discuss (1 response)
My advisor and I are working on a tool to automatically find bugs in Java programs. One of the interesting results of our work is that we've found hundreds of real bugs in production code using extremely simple techniques. We believe that automated tools, if used more widely, could prevent a lot of bugs from making it into production systems.
Looks very interesting. I think I'll give it a go.
Posted to Software-Eng by Dan Shappir on 4/30/03; 12:23:03 AM
Discuss (4 responses)
Recursive Recovery (RR) aims to develop an easy-to-use, effective technique for systems to recover from failure. It is part of the Recovery Oriented Computing (ROC) approach to making software more dependable.
Also Crash-Only Software.
Links from an email by Scott Lystig Fritchie to the Erlang mailing list (Erlang does this kind of thing).
Posted to Software-Eng by andrew cooke on 4/24/03; 12:16:18 PM
Discuss
I would contend that any type of consistent notation is better than none. The interesting issues here IMO are when type annotation gets in the way of abstraction and differences between notation strategies for distinct PLs.
Posted to Software-Eng by Dan Shappir on 3/23/03; 2:09:59 AM
Discuss (4 responses)
JML (The Java Modeling Language) seems like an interesting notation, but I didn't manage to find detailed information on what tools are currently available (heck, I can invent notations myself, surely having a new notation is not the point).
Posted to Software-Eng by Ehud Lamm on 3/20/03; 2:33:41 PM
Discuss (3 responses)
SPARK is sometimes regarded as being just a subset of Ada with various annotations that you have to write as Ada comments. This is mechanically correct but is not at all the proper view to take. SPARK should be seen as a distinct language in its own right and that is one reason why the title was changed in this edition.
Spark is being actively used in industry (e.g., avionics), so it is of particular interest.
Here's an overview of the language and tools.
Posted to Software-Eng by Ehud Lamm on 3/12/03; 3:19:21 AM
Discuss
Checked exceptions, declared as part of subroutine signatures, are appealing. They make the abstraction interface more readable, and ensure error handling consistency.
However, they introduce coupling into the system, can be seen to break encapsulation, and can cause a ripple effect of changes during maintenance.
This article weighs the pros and cons.
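A small Java sketch of the trade-off (my own example, not from the article): the checked exception becomes part of every signature on the call path, which documents the interface but also couples callers to it.

```java
// Sketch: the checked exception is part of the interface, so every caller
// up the chain must either handle it or redeclare it (the "ripple effect").
public class Checked {
    static class ParseError extends Exception {
        ParseError(String msg) { super(msg); }
    }

    static int parseDigit(char c) throws ParseError {
        if (c < '0' || c > '9') throw new ParseError("not a digit: " + c);
        return c - '0';
    }

    // This caller doesn't handle ParseError, so it must redeclare it --
    // changing parseDigit's signature would ripple through here too.
    static int sumDigits(String s) throws ParseError {
        int total = 0;
        for (char c : s.toCharArray()) total += parseDigit(c);
        return total;
    }

    public static void main(String[] args) {
        try {
            System.out.println(sumDigits("123")); // 6
            sumDigits("1x3");
        } catch (ParseError e) {
            System.out.println("handled: " + e.getMessage());
        }
    }
}
```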
Posted to Software-Eng by Ehud Lamm on 3/11/03; 3:53:32 AM
Discuss (2 responses)
Not different from what you'd expect to find, if you are familiar with languages with exceptions.
But there are some nuances due to Erlang's problem domain and concurrency semantics.
Posted to Software-Eng by Ehud Lamm on 3/10/03; 3:08:44 AM
Discuss (5 responses)
Another take on the "Should scripting languages be used for industrial-strength projects?" question, inspired in part by the Guido van Rossum interview that was discussed here at length last week.
Things become more interesting when you try to enumerate programming language facilities that would help cope with a "world of distributed services in constant flux."
Perhaps the first thing that comes to mind is choosing interpreted over compiled languages. Naturally, this distinction is a bit naive: virtual machine bytecodes can enable mobile code just as well as (if not better than) high level languages. But the classic model of compiling to platform specific object code is quite problematic, even if you take into account things like DLLs.
Next comes the issue of stand-alone programs versus the language-as-runtime-environment approach exemplified by Smalltalk. The latter is a bit more natural for plug-in components.
For foreign components to interact you will usually want some form of reflection or introspection capabilities in the language, and perhaps a way to access meta-data about components.
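A minimal Java illustration of the kind of introspection meant here (the stand-in "component" is hypothetical): a host can discover and invoke a foreign component's interface at runtime without having compiled against it.

```java
import java.lang.reflect.Method;

// Sketch: discovering a component's meta-data and invoking a method by
// name at runtime, as a dynamic component loader would.
public class Introspect {
    public static void main(String[] args) throws Exception {
        Object component = "plug-in"; // a string standing in for a component
        Class<?> meta = component.getClass();
        System.out.println(meta.getName()); // java.lang.String
        // Look up and invoke a method by name, without static knowledge of it:
        Method m = meta.getMethod("length");
        System.out.println(m.invoke(component)); // 7
    }
}
```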
To manage the resulting menagerie of components, you would want the language to provide support for versioning, and hopefully some mechanism to ensure the integrity of the system and the mutual consistency of its components (think types).
Since failures are likely to happen from time to time, it is wise to require some contract support from the language, ideally support for sound behavioral contracts and blame analysis.
Good language support for this sort of programming scenario will move the programming language into the land of languages-as-operating systems, since managing resources and cleanup for components is almost a must.
Smaller features can also come in handy: things like automatic code updates from the Internet, and possibly hot code replacement (think Erlang).
All these issues were mentioned here in the past, with links to research papers exploring their implementation and interaction. I see no reason why many of these things can't be done manually, but programming language support would reduce programming effort and increase reliability.
Posted to Software-Eng by Ehud Lamm on 2/7/03; 1:55:04 PM
Discuss (3 responses)
From the abstract:
This paper explores the idea that redundant operations, like type errors, commonly flag correctness errors. We experimentally test this idea by writing and applying four redundancy checkers to the Linux operating system, finding many errors. We then use these errors to demonstrate that redundancies, even when harmless, strongly correlate with the presence of traditional hard errors (e.g., null pointer dereferences, unreleased locks). Finally, we show that flagging redundant operations gives a way to make specifications "fail stop" by detecting dangerous omissions.
Seen on Slashdot
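A toy Java illustration of the paper's observation (my own example, not taken from their checkers): the redundant null check below is harmless in itself, but it usually means the author intended the earlier dereference to be guarded, so the redundancy flags a likely real bug.

```java
// Toy illustration: the "guard" can never fire because the dereference
// already happened -- a redundancy checker would flag the dead branch,
// pointing at the real bug (an unguarded dereference).
public class Redundant {
    static String greet(String user) {
        String name = user.trim();      // dereference happens here...
        if (user == null) {             // ...so this check is redundant:
            return "hello, stranger";   //    dead code, never reached
        }
        return "hello, " + name;
    }

    public static void main(String[] args) {
        System.out.println(greet("ada")); // hello, ada
        // greet(null) throws before the "guard" is ever reached.
    }
}
```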
QuickCheck is very simple, which means it is easy to understand and use. The ICFP paper is well worth your time, and discusses several questions that are relevant for anyone using or designing testing tools (with useful and up to date references).
However, I don't think that the design is very Haskell specific. I think you can achieve most of the design goals with static polymorphism (templating) etc.
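Indeed, the core of the QuickCheck idea needs very little language support. A hand-rolled sketch in Java (`MiniCheck` is my own toy, not a real library): generate random inputs and check a property against each; any failing input is a counterexample.

```java
import java.util.Random;
import java.util.function.Predicate;

// Hand-rolled sketch of the QuickCheck idea: random inputs are generated
// and a property is checked against each one.
public class MiniCheck {
    static boolean forAllInts(Predicate<Integer> property, int trials, long seed) {
        Random rng = new Random(seed);
        for (int i = 0; i < trials; i++) {
            int x = rng.nextInt(2001) - 1000;   // inputs in [-1000, 1000]
            if (!property.test(x)) {
                System.out.println("counterexample: " + x);
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Holds for every generated input:
        System.out.println(forAllInts(x -> x + 0 == x, 100, 42));
        // Fails once a negative input is generated:
        System.out.println(forAllInts(x -> Math.abs(x) == x, 100, 42));
    }
}
```

Real QuickCheck adds the clever parts this sketch omits: type-driven generators for structured data, and shrinking of counterexamples to minimal ones.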
Posted to Software-Eng by Ehud Lamm on 12/24/02; 6:07:08 AM
Discuss
The system has been fully implemented: an available prototype takes a type environment and a set of class declarations as input and produces standard Java bytecode for those classes.
Posted to Software-Eng by Ehud Lamm on 12/22/02; 5:24:06 AM
Discuss
Aspect Oriented Programming is a programming technique that makes it possible to express crosscutting concerns in a modular way. C# is a new language that has been designed for the Microsoft .Net Framework. AspectC# is a tool that enables AOP constructs within the C# language. The AOP constructs are the same as those of AspectJ.
Currently in alpha.
This essay really needs debunking, and who is more suited for this task than LtU readers?
It's not that I have anything against scripting languages; indeed, I think they have their uses. But, to pick one example from this essay, I think that weak typing and polymorphic typing do not mean the same thing...
Posted to Software-Eng by Ehud Lamm on 12/18/02; 3:47:30 PM
Discuss (44 responses)
This is nice. When I show these kinds of errors, though, I prefer to show the mismatches by drawing arrows between the relevant program points. This is perhaps a bit harder to do on the web, but will make the demo more compelling...
Posted to Software-Eng by Ehud Lamm on 11/27/02; 4:39:48 AM
Discuss
I think we need better programming language support for components, rather than having separate composition languages, but that's just me.
Still, keep in mind that the best tools we have to specify behaviour are inside expressive programming languages. Specifically, the best tool for the job is a good type system, that allows you to specify important logical properties. Indeed, isn't this the reason why functional programming matters?
Posted to Software-Eng by Ehud Lamm on 11/20/02; 12:42:23 PM
Discuss (7 responses)
We had many more technical discussions of AOP. These slides are a nice introduction to the motivation behind aspects.
Posted to Software-Eng by Ehud Lamm on 10/30/02; 6:57:57 AM
Discuss (2 responses)
...which is more valuable to producing robust code, testing or static analysis and proofs? ... The main argument in favor of static analysis (including type checking) is that the results hold for all possible runs of the program, whereas passing unit tests only guarantees that the tested components hold specifically for the inputs they were tested with (on the platform they were tested on)... The main argument in favor of unit testing is that it is much more tractable. You can test many constraints of a program that are far beyond the reach of contemporary static-analysis tools... Each tool has a major strength that can be particularly useful to complement the other tool: Unit tests are able to show the common paths of execution, to show how a program behaves. Analysis tools are able to check the coverage that unit tests provide.
The commercial Clover "is a code coverage tool for Java. It discovers sections of code that are not being executed. This is used to determine where tests are not adequately exercising the code." (Example: Unit Test coverage report for the Xerces XML parser.)
JUnitDoclet and JUnit test case Builder are similar open source tools still in alpha. Daikon infers program invariants from unit tests.
(Diagnosing Java code: Unit tests and automated code analysis working together, Eric E. Allen)
Posted to Software-Eng by jon fernquest on 10/19/02; 3:16:49 AM
Discuss
The page includes lecture slides, and links to relevant papers. The papers themselves make a great reading list about module systems from a type theoretic point of view. Most of the issues discussed are important for any discussion of module systems, type centric or otherwise.
A personal favorite is Mitchell and Plotkin's Abstract Types Have Existential Type. Felleisen and Flatt's units paper is also worth checking.
Notice that the LtU department Software-Eng is for items relating to the role of programming languages in software engineering. This includes things like module systems, DbC etc.
Posted to Software-Eng by Ehud Lamm on 10/17/02; 2:41:04 PM
Discuss (3 responses)
According to its proponents, open source style software development has the capacity to compete successfully, and perhaps in many cases displace, traditional commercial development methods. In order to begin investigating such claims, we examine data from two major open source projects, the Apache web server and the Mozilla browser. By using email archives of source code change history and problem reports we quantify aspects of developer participation, core team size, code ownership, productivity, defect density, and problem resolution intervals for these OSS projects. We develop several hypotheses by comparing the Apache project with several commercial projects. We then test and refine several of these hypotheses, based on an analysis of Mozilla data. We conclude with thoughts about the prospects for high-performance commercial/open source process hybrids.
The first lecture of MIT's software engineering lab [Prior LTU Posting] gives some useful background on the Netscape source code that would later be used in Mozilla (see "The Netscape Story", p. 11).
(Two Case Studies of Open Source Software Development: Apache and Mozilla, A. Mockus, R. Fielding, J. Herbsleb, January 2002)
Posted to Software-Eng by jon fernquest on 10/10/02; 1:43:42 AM
Discuss (4 responses)
Does the controversial Sapir-Whorf Hypothesis or theory of "linguistic relativity" that has been proposed for human languages hold for programming languages and the programmers who use them?
Linguistic relativity is the "proposal that the particular language one speaks influences the way one thinks about reality"... According to the theory, "people who speak different languages perceive and think about the world quite differently"...
"The Sapir-Whorf Hypothesis is the most famous. Benjamin Whorf's hypothesis had two interpretations, the first is called determinism ["people's thoughts are determined by the categories made available by their language"], and the second is labeled relativism ["differences among languages cause differences in the thought of their speakers"..... "
Do the language features of the programming languages a programmer uses as a computer science student determine the ideas that a programmer is able to use and understand as a professional programmer? [ Sapir-Whorf Hypothesis on Google]
The best explanation I've seen thus far about the PLT module system.
The paper explains the design forces very well, but also includes code snippets that show how modules and macros interact in MzScheme.
Other module systems for Scheme are discussed.
The Bibliography of Scheme-related Research has a large and interesting section about modules. I'll try to report on several of the other papers, unless some other LtU contributor preempts me.
Posted to Software-Eng by Ehud Lamm on 9/28/02; 4:36:49 AM
Discuss
Oleg suggested this as another application of module systems to the composition of operating system kernels.
The most important part of this work, if I understand correctly, is trying to identify minimal abstractions that support the proposed model of component composition.
Posted to Software-Eng by Ehud Lamm on 9/23/02; 3:10:44 AM
Discuss
Why are "languages designed for smart people" (LFSPs) so much more fun to program in than "languages designed for the masses" (LFMs)? .... LFSPs have a much greater support for abstraction, and in particular for defining your own abstractions, than LFMs. This is not accidental; LFMs deliberately restrict the abstractive power of the language, because of the feeling that users "can't handle" that much power... This is reassuring to Joe Average, because he knows that he isn't going to see any code he can't understand. It is reassuring to Joe Boss, because he knows that he can always fire you and hire another programmer to maintain and extend your code. But it is incredibly frustrating to Joe Wizard Hacker, because he knows that his design can be made N times more general and more abstract but the language forces him to do the Same Old Thing again and again.
Can extensions to mainstream languages that introduce innovative features like multi-dispatch or functional language features in an incremental (controllable, gradual) fashion reduce this fear?
Component composition with Knit thus acts like aspect weaving: component interfaces determine the "join points" for weaving, while components (some of which may be automatically generated) implement aspects. Knit is not limited to the construction of low-level software, and to the degree that a set of components exposes fine-grained relationships, Knit provides the benefits of aspect-oriented programming within its component model.
Among the recent discussions of AOP, this seems appropriate.
The basic notion used in this paper is that of a unit. A unit is a component or module definition, defining imports, exports, and dependencies.
The main claim of the paper is that units are a proper foundation for developing the cross-cutting facets of a system in a modular fashion. The interfaces between units are the AOP join points.
Posted to Software-Eng by Ehud Lamm on 9/19/02; 1:35:33 PM
Discuss
..humans organize knowledge into sets of interrelated concepts ... concepts can be modeled as objects quite adequately...objects can be organized into inheritance and containment hierarchies which capture the relationships between the corresponding concepts ... A domain is defined as an area of knowledge or activity characterized by a set of concepts and terminology understood by practitioners in that area [UML1.0], i.e. an area of expertise. ... 1. A concern is a domain used as a decomposition criterion for a system or another domain with that concern. Concerns can be used during analysis, design, implementation, and refactoring. 2. An aspect is a partial specification of a concept or a set of concepts with respect to a concern. ... The concerns used in decomposition should yield aspects which are loosely coupled, (nearly) orthogonal, and highly declarative.
At the end of the paper a table compares different domain engineering technologies. (Beyond Objects: Generative Programming, Czarnecki, Eisenecker, and Steyaert)
This paper describes an approach called generic programming (see also http://www.cs.utexas.edu/users/novak/autop.html ) to reusable code components. The basic idea is to create generic algorithms, like iterate-accumulate that iterates over a collection accumulating a parameter (very similar to fold), and parameterise data by views, which are light-weight wrappers around data to provide the interfaces expected by the generic algorithms. One assembles a program by combining generic algorithms and views into appropriate combinations. The implementation then compiles the code to a target language (Lisp, C, etc.) using inlining and partial evaluation to remove the abstractions introduced at the generic level.
At first I thought this paper introduced nothing new. Generic programming is something anyone could do in a language that supports some notion of interfaces. Upon later reflection I've decided the generic programming system is useful in two ways: Firstly, it is useful to have a way to explicitly program in different levels (similar to DSLs). Secondly, having guaranteed compiler optimisations gives the freedom to program in a much more abstract style than one might normally.
The Future Work, particularly the idea of encoding constraints beyond those offered by current type systems, is quite interesting.
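The iterate-accumulate/view idea can be sketched in Java (names are mine; the paper compiles its generics to Lisp or C rather than targeting a JVM): one generic fold, plus a lightweight view adapting raw data to the interface the fold expects.

```java
import java.util.Iterator;
import java.util.function.BinaryOperator;

// Sketch of iterate-accumulate plus views: a single generic algorithm,
// and a cheap wrapper giving raw data the interface it expects.
public class Generic {
    // The generic algorithm: fold over anything iterable.
    static <T> T accumulate(Iterable<T> data, T init, BinaryOperator<T> op) {
        T acc = init;
        for (T x : data) acc = op.apply(acc, x);
        return acc;
    }

    // A view: presents a plain int[] as an Iterable<Integer> without copying.
    static Iterable<Integer> view(int[] raw) {
        return () -> new Iterator<Integer>() {
            int i = 0;
            public boolean hasNext() { return i < raw.length; }
            public Integer next() { return raw[i++]; }
        };
    }

    public static void main(String[] args) {
        int[] data = {1, 2, 3, 4};
        System.out.println(accumulate(view(data), 0, Integer::sum));    // 10
        System.out.println(accumulate(view(data), 1, (a, b) -> a * b)); // 24
    }
}
```

The difference from plain interface-based programming is the paper's guarantee that inlining and partial evaluation remove the view wrappers entirely in the compiled output.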
Posted to Software-Eng by Noel Welsh on 9/13/02; 3:47:56 AM
Discuss (2 responses)
The most recent presentation (referenced above in the title) presents Sweet, a "Static Weaving and Editing Tool", an Eclipse IDE extension. Sweet creates Aspect code views that "put fields and the methods that depend on them in one chunk."
The presentation also includes a critique of the visitor pattern, an example of adding an optimization with Sweet, and a brief description of the MJ Compiler (mjc), which introduces abstract syntax in modular chunks (type checking, static checking, code generation, optimization, ...) and is used for teaching.
(Perspectives On Software, Andrew P. Black and Mark P. Jones, 2000)
Posted to Software-Eng by jon fernquest on 9/10/02; 12:39:50 PM
Discuss (7 responses)
An interesting exercise in embedding little languages (or are these really frameworks?) in Scheme.
I particularly like the discussion of SQL embedding in sections 3.1 and 3.2. It shows the advantages that come from thinking at the programming language level.
Posted to Software-Eng by Ehud Lamm on 9/3/02; 3:42:48 AM
Discuss (20 responses)
...there is currently no common accepted foundation available describing how to compare aspect-oriented mechanisms. This makes it hard to determine if a new proposal really supplies a new solution for untangling code which is not already available in any other approach....Moreover, it must not be forgotten that object-oriented programming already provides some techniques for sharing code and hence permits to untangle code in some way.
It's a little hard to untangle the classification itself at times, and they don't tell you how they are going to put their classification to work in comparing and choosing aspect-oriented mechanisms and approaches, but the idea of using simple examples of actual code to derive more precise definitions is admirable.
Relevance to programming languages: an attempt to make the definition of "Aspect-Oriented Programming" more precise.
(A Proposal For Classifying Tangled Code, Stefan Hanenberg and Rainer Unland)
Posted to Software-Eng by jon fernquest on 8/24/02; 3:22:28 AM
Discuss (2 responses)
Language-Based Information-Flow Security. Andrei Sabelfeld, Andrew C. Myers. IEEE Journal on Selected Areas in Communication (to appear).
Conventional security mechanisms such as access control and encryption do not directly address the enforcement of information-flow policies. Recently, a promising new approach has been developed: the use of programming-language techniques for specifying and enforcing information-flow policies. In this article we survey the past three decades of research on information-flow security, particularly focusing on work that uses static program analysis to enforce information-flow policies. We give a structured view of recent work in the area and identify some important open challenges. (Postscript version)
As always, if you want to extend language semantics, the easiest way is to use the type system.
In a security-typed language, the types of program variables and expressions are augmented with annotations that specify policies on the use of the typed data. This means, of course, that these security policies can be enforced at compile time.
Like ordinary type checking, security type checking is also inherently compositional: secure subsystems combine to form a larger secure system as long as the external type signatures of the subsystems agree.
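One can approximate the flavor of a security-typed language even in plain Java, by encoding the security level as a phantom type parameter (this encoding is my own sketch, and far weaker than the systems surveyed, which also track implicit flows):

```java
// Sketch: tagging data with a secrecy level in its type, so that passing
// Secret data to a Public sink is rejected by the ordinary type checker.
public class SecTypes {
    interface Public {}
    interface Secret {}

    // A value labeled with its security level L (a phantom type parameter).
    static final class Labeled<L, T> {
        final T value;
        Labeled(T value) { this.value = value; }
    }

    // A sink that only accepts Public data.
    static <T> String publish(Labeled<Public, T> data) {
        return "published: " + data.value;
    }

    public static void main(String[] args) {
        Labeled<Public, String> greeting = new Labeled<>("hello");
        Labeled<Secret, String> password = new Labeled<>("hunter2");
        System.out.println(publish(greeting));   // fine
        // publish(password);  // rejected at compile time: wrong level
    }
}
```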
Posted to Software-Eng by Ehud Lamm on 8/21/02; 1:11:33 PM
Discuss
Programming language semantics has lost touch with large groups of potential users. Among the reasons for this unfortunate state of affairs, one stands out. Semantic results are rarely incorporated in practical systems that would help language designers to implement and test a language under development, or assist programmers in answering their questions about the meaning of some language feature not properly documented in the language's reference manual. Nevertheless, such systems are potentially more effective in bringing semantics-based formalisms and techniques to the places they are needed than their dissemination in publications, courses, or even exemplary (but little-used) programming languages.
[Citeseer]
Posted to Software-Eng by jon fernquest on 8/16/02; 2:55:17 AM
Discuss (1 response)
Panic in corporate development land. The prototype...is scheduled to be presented to upper management in a week and it still takes several minutes to get a query back. Call in the gunslinger. ...The Smalltalk dialect was unfamiliar to me and the problem domain (screen scraping mainframe terminal data and shipping it to a web browser) was alien. But I needed the money... ...I dived into the code with the help of one of the locals. We poked around with the browsers and debugger... By the end of the day we hadn't found much, but we had eliminated a lot of places where the problem wasn't... We didn't make the noon deadline, but we did have it an hour later. ...We made the change there in the debugger and proceeded. Additional testing confirmed that performance went from nasty to acceptable and that there were no obvious harmful side effects.
Lesson: The method of least knowledge. I never did learn much about the system under development and I still know zip about screen scraping mainframe terminal data, but I was able to use the available tools like the debugger to drill into the code where it was possible to see a potential problem. In fact I think that if we had focused more directly on solving the problem from the start rather than on my understanding the system as a whole we might have fixed it the first day.
Why does there always seem to be this tension between understanding the big picture and solving very specific problems? Abstractions often seem to bury the details of a design. I often see criticisms of Java programs that use too much abstraction. What can be done to make drilling down into abstractions to get the details easier?
The author provides a nice outline of troubleshooting strategies, ranked by the time they consume, with ruminations on the tools he has and the tools he would like to have. In an ideal world with perfect semantic descriptions of software systems, would this sort of gunslinger or relief pitcher become extinct?
A position paper from the OOPSLA 2001 Workshop On Software Archeology: Understanding Large Systems.
Posted to Software-Eng by jon fernquest on 8/16/02; 2:53:59 AM
Discuss (4 responses)
Developers working on existing programs repeatedly have to address concerns (features, aspects...) that are not well modularized in the source code comprising a system. In such cases, a developer has to first locate the implementation of the concern in the source code comprising the system, and then document the concern sufficiently to be able to understand it and perform the actual change task. Existing approaches available to help software developers locate and manage scattered concerns use a representation based on lines of source code. Such a line-of-code representation makes it difficult to understand the concern and limits the analysis possibilities. ... ...By visually navigating structural program dependencies through the tool's graphical interface, you can locate the code implementing a concern, and store the result as an abstract representation consisting of building blocks that are easy to manipulate and query.
The statistics associated with text may be continuous, categorical, or binary. For a line in a computer program, when it was written is a continuous statistic, who wrote it is a categorical statistic, and whether or not the line executed during a regression test is a binary statistic.
Seesoft is an instance of Exploratory Data Analysis, an approach to statistics devoted to identifying patterns in data. There is also a shorter paper (source of above quotes), a short online summary, and a description in an online chapter on GUIs for information retrieval. TileBars are a simpler technique that uses a grey-scale to represent statistics.
Posted to Software-Eng by jon fernquest on 8/11/02; 2:14:49 AM
Discuss (5 responses)
Subtitled "Assessing the Evidence from Change Management Data." The software system studied is a 15-year-old telephone switching system to which new features are continually being added. Statistical methods are applied to data from the version management system. An economic model is used. Code decays when it is "more difficult to change than it should be." This excessive difficulty is reflected in three factors: programmer cost, time to make the change, and quality of the software after the change. The last two factors are constrained by deadlines and quality standards, so in the end code decay is reflected in excessive programmer cost to produce code of a given quality by a certain deadline. (102) Empirical findings include:
(1) Increase over time in the number of files touched per change to the code; (2) The decline in modularity of a subsystem of the code; as measured by changes touching multiple modules; (3) Contributions of several factors (notably, frequency and recency of change) to fault rates in modules of code; and (4) That span and size of changes are important predictors (at the feature level) of the effort to implement a change. (110)
Includes a list of "causes of code decay" and "risk factors for code decay" (103) The notions of "fatal code decay" and "perfective maintenance" to prevent code decay are also mentioned. Would functional Erlang fare any better?
A draft book on TDD, by Kent Beck.
Reading this I started to wonder just how different TDD is from classic stepwise refinement, in which you write client code, and only then drill down into subroutines.
Stepwise refinement is about procedural programming (e.g., Pascal or C). eXtreme programming and TDD tend to be more about OOP. Indeed, the essence of the xUnit framework is an object oriented design.
Posted to Software-Eng by Ehud Lamm on 7/31/02; 4:18:02 AM
Discuss (4 responses)
Topics include Abstract Interpretation, Type-Based Program Analysis and Theorem Proving.
Posted to Software-Eng by Ehud Lamm on 7/29/02; 1:30:55 AM
Discuss
(via Slashdot)
"We've experienced enormous interest in Cg since its introduction," said Dan Vivoli, vice president of marketing at NVIDIA. "We're open sourcing this compiler code to further accelerate the transition to an era of advanced real-time effects through Cg."
As reported here before, Cg is a C-based programming language specification and implementation that is intended for the fast creation of special effects and real-time cinematic-quality experiences on multiple platforms.
Notice the annotated PCC bibliography.
Posted to Software-Eng by Ehud Lamm on 7/21/02; 10:18:41 AM
Discuss (1 response)
Poor buffer handling is implicated in many security issues that involve buffer overruns. The functions defined in Strsafe.h provide additional processing for proper buffer handling in your code. For this reason, they are intended to replace their built-in C/C++ counterparts as well as specific Microsoft® Windows® implementations.
Ever the one to embrace and extend, Microsoft now targets the good old C standard strings library. This is a minor thing when compared to what they've done to most every other programming language in the context of .NET. Still, it'd be interesting to see if the C community accepts the convention of the HRESULT function return value.
Collaborations are modules that encapsulate code involved in the same operation in a system. They have received increasing attention in software engineering because their separation of behavior simplifies software evolution, configuration, and maintenance. This paper explores the effect of these designs on modular model checking, especially on state space sizes and on the need for property decomposition.
Model checking is an attractive technique for verification which is increasingly used in the field of hardware design. Alas, applying model checking to software is problematic because of the so-called state explosion problem. Algorithms for model checking are based on the (efficient) exploration of the entire state space of the model, to see that it satisfies the temporal logic properties which serve as the specification. Software systems tend to have a much larger state space than hardware devices, and thus model checkers have a hard time working with them. Modular model checking may help in solving this problem.
This paper presents one approach.
Buzzwords: PLT Scheme is mentioned (of course), and the SPIN model checker.
Posted to Software-Eng by Ehud Lamm on 7/7/02; 10:31:22 AM
Discuss
The programs can be formulated in a small subset of Ada (see syntax). The annotations are preconditions, postconditions, loop invariants, and termination functions for loops. The correctness proofs are based on the weakest precondition approach and other proof rules as described in the Literature.
I played a bit with the online demo. When I finally managed to get it to accept my code, it managed to prove it correct. Still, I am not sure exactly what meta-functions can be used in the assertions (e.g., sum works).
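For readers who haven't seen such annotations, here is their shape in Java (my own sketch; the tool proves these statically via weakest preconditions, whereas `assert` only checks them on the runs you happen to execute):

```java
// Sketch: precondition, postcondition, loop invariant, and termination
// function for a summation loop, written as runtime assertions.
public class Verified {
    // Precondition: n >= 0.  Postcondition: result == n*(n+1)/2.
    static int sum(int n) {
        assert n >= 0 : "precondition";
        int s = 0, i = 0;
        while (i < n) {
            // Loop invariant: s == i*(i+1)/2; termination function: n - i.
            assert s == i * (i + 1) / 2 : "invariant";
            i++;
            s += i;
        }
        assert s == n * (n + 1) / 2 : "postcondition";
        return s;
    }

    public static void main(String[] args) {
        System.out.println(sum(10)); // 55
    }
}
```

(Run with `java -ea` so the assertions are actually evaluated.)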
Posted to Software-Eng by Ehud Lamm on 7/3/02; 3:48:27 AM
Discuss
Nothing Beats Sun Labs' "Ace" Technology for Fast Development of Flexible, High-Performance, Enterprise Applications.
Code from specifications (not the first time it's been discussed here, I know, but I wonder how close this is to being released on the market?).
Posted to Software-Eng by andrew cooke on 6/20/02; 10:53:18 AM
Discuss (6 responses)
The C link includes a nice, simple intro to CSP.
(I thought these might be interesting in the light of this post)
Posted to Software-Eng by andrew cooke on 5/27/02; 7:38:39 AM
Discuss (1 response)
Programming language design as Human Engineering.
Interesting discussion on language design. Tucker Taft led the Ada 9X language design team (Ada 9X was the code name for the Ada version now known as Ada 95).
The discussion of exceptions may be a bit surprising.
Posted to Software-Eng by Ehud Lamm on 4/22/02; 9:08:27 AM
Discuss (1 response)
Interesting LtU discussion. I am still not sure I understand why this is a big deal, but luckily we have some contributors who do.
Posted to Software-Eng by Ehud Lamm on 3/30/02; 1:55:06 AM
Discuss
Sam Ruby writes about creating extensible wire level protocols, but I think the same considerations apply in other cases.
As is well known, some conclude that these considerations imply that next generation languages are going to be dynamically typed. I don't buy this argument, but the issues themselves cannot be dismissed.
Posted to Software-Eng by Ehud Lamm on 3/21/02; 7:18:10 AM
Discuss (1 response)
Raise your hand if you remember REM (which continues to work in VB.NET, BTW). Makes some good points about comments, with some funny samples from the JDK.
Correctness by Construction: Better Can Also Be Cheaper. Peter Amey, Praxis Critical Systems. CrossTalk Magazine, March 2002
For safety and mission critical systems, verification and validation activities frequently dominate development costs, accounting for as much as 80 percent in some cases. There is now compelling evidence that development methods that focus on bug prevention rather than bug detection can both raise quality and save time and money. A recent, large avionics project reported a fourfold productivity and 10-fold quality improvement by adopting such methods. A key ingredient of correctness by construction is the use of unambiguous programming languages that allow rigorous analysis very early in the development process.
The main focus of the paper is the SPARK Ada toolset, but the paper raises issues that are of general interest. The key notion is that of the benefit of a precise language or language subset. This is important since it allows for tool support.
The paper discusses the building of the avionics of the Lockheed C130J (the Hercules II Airlifter), and the costs of achieving level A DO-178B certification (the DO-178B being the prevalent standard against which avionics software is certified).
Posted to Software-Eng by Ehud Lamm on 3/12/02; 10:44:48 AM
Discuss (3 responses)
Linus on software design, and the lack thereof. Not really about programming languages (ok not at all), but about the design/implementation cycle in general, of which programming languages play an important part.
Lots of effort seems to be geared towards formalism in proving the correctness of programs. Yet, when it comes to comments, there's usually little formal structure. Sure, some languages have ad hoc standards, where certain meta tags are used to extract documentation to an HTML file, etc... but these are not really part of the formal language definition, and there's no enforcement mechanism to speak of.
Wondering if the language researchers have put any thought into making comments a bit easier to maintain and digest?
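The "meta tags extracted to HTML" convention mentioned above is exemplified by Javadoc. A small illustrative sketch (the class and its names are hypothetical): the tags are structured, and a separate tool renders them, but the compiler does not check them, so a stale @param name is accepted silently, which is exactly the enforcement gap being complained about.

```java
// Hypothetical class demonstrating Javadoc's structured comment tags.
public class Account {
    private long balanceCents;

    public Account(long initialCents) {
        this.balanceCents = initialCents;
    }

    /**
     * Withdraws the given amount from this account.
     *
     * @param amountCents the amount to withdraw, in cents; must be
     *                    non-negative and no larger than the balance
     * @return the remaining balance in cents
     * @throws IllegalArgumentException if the amount is invalid
     */
    public long withdraw(long amountCents) {
        if (amountCents < 0 || amountCents > balanceCents) {
            throw new IllegalArgumentException("invalid amount: " + amountCents);
        }
        balanceCents -= amountCents;
        return balanceCents;
    }
}
```

Running the javadoc tool over this file produces HTML documentation from the tags, but renaming the parameter without updating the @param line compiles without complaint.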
Posted to Software-Eng by Chris Rathman on 3/1/02; 3:33:21 PM
Discuss (11 responses)
Scala is a general purpose programming language with a special focus on web services. It combines object-oriented, functional and concurrent elements.
I found the examples document to be quite well written and interesting (though some would object to Quicksort being the first example...)
The rationale document talks briefly about some of the theoretical foundations.
Posted to Software-Eng by Ehud Lamm on 2/4/02; 9:16:12 AM
Discuss (2 responses)
I have a feeling that the wonderful ideas coming from advocates of the so-called scripting-language approach will have problems scaling up.
Sure, one web service is nice. When you build entire enterprise systems around this architecture, reliability will suddenly become important. Impact analysis (think Y2K scenarios) will become essential. And so on, and so forth.
But maybe I am missing the point. I admit I haven't seen any really convincing approach yet.
Posted to Software-Eng by Ehud Lamm on 1/31/02; 1:59:46 PM
Discuss (5 responses)