As of 2016_07_27 I haven't written a single line of Component Pascal in my whole life. I just downloaded BlackBox, executed it with Wine on openSUSE Linux, read some introductory parts of the tutorial, and browsed various Pascal- and Oberon-related pages to make sense of the Pascal-like ecosystem. That is to say, on this forum I have the benefit of seeing the whole BlackBox and Component Pascal story with fresh eyes. The goal of my comment is to give some feedback.
It seems to me that the whole set of Oberon/Pascal/A2 software projects is a typical academic mess that is not at all specific to the institution called "ETH Zurich". I have seen a similar mess at both the University of Tartu and the Tallinn University of Technology. By mess I mean that web pages are broken and project web pages, for example,
http://www.oberon.ethz.ch/
are not maintained. At best there are some corners that seem to be better maintained, like
blackboxframework.org
projectoberon.com
freepascal.org
http://mseide-msegui.sourceforge.net/
lazarus-ide.org
pilotlogic.com (the open source Pascal IDE and library named CodeTyphon)
Unlike the Free Pascal ecosystem, the "new Oberon", Component Pascal, seems to suffer heavily from the inability of its developers, including highly honored academics like Niklaus Wirth, to notice that
TOTALLY INDEPENDENT OF THE PROGRAMMING LANGUAGE
there exists the issue of
DOMAIN SPECIFIC DATA ENTRY.
For example, it takes some work to enter the rules for matrix multiplication, text searching, reading text in different encodings, describing physics for game engines, etc. In practice the nice rules from university-level textbooks are not good enough, because they have to be transformed to execute efficiently on real hardware, but that is just optimization. Optimization is a lot of work, yet even without any optimization it is a lot of work to describe all those rules in any general-purpose programming language. That explains why the old, archaic Fortran numeric libraries are still in use in 2016: it takes a lot of domain-specific knowledge and a lot of work to describe the domain-specific rules. The result is that, for historic reasons, the best domain experts tend to prefer different programming languages, and if a freelancer like me, who works on real-life problems, wants to create a program from the software components that best reflect the domain-specific knowledge, the freelancer (or corporate developer) has to use multiple programming languages in the same application.
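A minimal sketch of that situation, assuming Python with NumPy and SciPy installed: the matrix multiplication rules were entered once, decades ago, in Fortran (the BLAS routine DGEMM), and a modern language just calls through a thin wrapper instead of re-entering them.

    import numpy as np
    from scipy.linalg.blas import dgemm  # thin wrapper around the Fortran BLAS routine DGEMM

    a = np.array([[1.0, 2.0],
                  [3.0, 4.0]])
    b = np.array([[5.0, 6.0],
                  [7.0, 8.0]])

    # Computes c = 1.0 * a * b inside decades-old, heavily optimized Fortran code.
    c = dgemm(alpha=1.0, a=a, b=b)
    print(c)

The point is that nobody re-enters those rules in each new language; everybody links against the old Fortran.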
If my software is to use multiple programming languages, then the interoperability of those languages becomes very important. As of 2016_07 the only connecting layer between different programming languages that I'm aware of is the operating system. The Java Virtual Machine, the .NET CLR, etc. do not really allow implementations of different programming languages to exchange data without some operating-system-dependent hacks that use files, sockets, messages, etc. (a sketch of such OS-level data exchange follows after this paragraph). Maybe I'm mistaken, but the way I currently understand the BlackBox 2016_07 release is that BlackBox requires Wine to run on Linux. Wine has pretty large RAM requirements, which effectively makes the argument about BlackBox's low resource consumption irrelevant. I do not like to run the slow Wine on a Raspberry Pi, despite the fact that small computers like the Raspberry Pi, including the
https://getchip.com/
http://pine64.com/
might be exactly the target hardware for BlackBox-based solutions.
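Here is the promised minimal sketch of OS-level data exchange, assuming Python on a Unix-like system; "cat" merely stands in for a component written in some other language. Note that the byte order has to be agreed on by convention (the "<" in the struct format), because the operating system moves raw bytes, not typed data.

    import struct
    import subprocess

    # Pack three doubles with an explicit little-endian layout; both sides
    # must agree on this convention, the pipe itself enforces nothing.
    payload = struct.pack("<3d", 1.0, 2.0, 3.0)

    # "cat" stands in for a component written in another language; it just
    # echoes the bytes back, which is enough to demonstrate the round trip.
    out = subprocess.run(["cat"], input=payload, stdout=subprocess.PIPE).stdout

    print(struct.unpack("<3d", out))  # (1.0, 2.0, 3.0)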
That being said, I do think that the concepts of Component Pascal are very attractive, and all the hard work that Niklaus Wirth and his colleagues and students have done over the decades deserves not just credit, which is kind of useless, but PRACTICAL ADOPTION. I think it should not be the case that the grand master, Niklaus Wirth, retires with the knowledge that he has failed:
https://youtu.be/xLTUvFboveM?t=47m32s
However, unfortunately, the typical attitude of university staff, "I have completed my academic project, the papers have been published, and the research grant covers only research, not engineering", almost guarantees that the research results will not be adopted outside of academia. People outside of academia do not have the time or money to get to know the research project deliverables at the level of detail that would allow them to convert that half-complete product into a component usable in practice, not to mention the duplication of effort in re-learning everything that the academic researchers have already learned.
(The various numeric calculation libraries are a fine illustration: usable through an API by almost anyone, but really hard to implement properly. The same goes for symbolic calculation software, image processing software, and various image/video/sound codecs.)
Another serious mismatch between academic researchers and industrial practitioners is that the experimental software of computer science researchers does not have to work with data that originates from the past and is encoded in legacy data formats. Industrial practitioners have to work with real-life data, which can be decades old and encoded in some really old and archaic data format. The programming languages that were in use during the era when the data was created/logged/saved have libraries for reading and writing the old data formats, but
all newer programming languages face the question of whether it is economically feasible to implement libraries for reading and writing all of those legacy data formats. A workaround, where a semi-legacy programming language P1 converts the data from format F1 to format F2, and then some other semi-legacy programming language converts it from format F2 to some more modern format F3, until the data is in a format new enough to be read by a library of the new programming language, still involves executing multiple components that have all been written in different programming languages. In that case the software components need to exchange data. As I described earlier, the only actually connecting layer that I'm aware of in 2016_07 is the operating system: files, pipes, messages, TCP/IP, shared RAM, etc. I find that a database engine like SQLite is a nice connection point, because it eliminates the byte and bit endianness issues, especially if the application runs on multiple machines (see the sketch below). On the other hand, computer science researchers save a lot of time if they do not have to spend it thinking about legacy data formats.
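A minimal sketch of SQLite as the connection point, again in Python; the file name "exchange.db" and the table layout are arbitrary choices of mine. Any other language with a SQLite driver (C, Free Pascal bindings, Java via JDBC, ...) can open the same file and read the same rows, and the SQLite file format is specified independently of the host machine's byte order.

    import sqlite3

    # Component A writes its results into a shared database file.
    con = sqlite3.connect("exchange.db")
    con.execute("CREATE TABLE IF NOT EXISTS samples (t REAL, value REAL)")
    con.executemany("INSERT INTO samples VALUES (?, ?)",
                    [(0.0, 1.5), (0.1, 1.7), (0.2, 2.1)])
    con.commit()
    con.close()

    # Component B, possibly written in a different language, later reads them.
    con = sqlite3.connect("exchange.db")
    for t, value in con.execute("SELECT t, value FROM samples ORDER BY t"):
        print(t, value)
    con.close()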
Grants and glory are awarded for novel mathematical ideas, not for backwards compatibility with legacy data formats, so the computer science researchers deliver what was ordered from them. That is definitely one mechanism by which the nice work of computer science researchers gets ignored by the "stupid and backwards" industry.
My recommendation for alleviating the situation is to use an idea from marketing: the greater the number of use cases where a product can be applied, the greater the potential market of the product. In terms of developer mind share: to attract more developers, an open source software product must be useful in a greater variety of software projects. To make a programming language useful in practice, libraries written in other programming languages must be usable with as little work as possible, and data exchange between components written in different programming languages must be implementable without "too much" development work (one last sketch below).
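As a final minimal sketch of "using another language's library with as little work as possible", assuming Python on a typical Linux system with glibc: the C standard library is loaded and called directly through a foreign function interface, with no code generation or wrapper build step at all.

    import ctypes
    import ctypes.util

    # Locate and load the C standard library; assumes a typical Linux setup.
    libc = ctypes.CDLL(ctypes.util.find_library("c"))

    # Declare the C signature of strlen so the call is properly typed.
    libc.strlen.argtypes = [ctypes.c_char_p]
    libc.strlen.restype = ctypes.c_size_t

    print(libc.strlen(b"Component Pascal"))  # prints 16

The less ceremony there is around this kind of border crossing, the larger the set of projects in which the language can realistically be used.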
Thank you for reading my comment.
I hope that I wasn't too harsh. After all, it took a lot of hard work and love to create Component Pascal, BlackBox, etc., but by the same token it would be really sad if all that hard work were wasted due to a weak strategy.