In this way, several interesting questions can be answered, such as: Are there grants that have the same prerequisites? How many companies are not eligible for any grant? Are there grants where eligibility also implies other grants? etc.
Prolog is becoming more interesting now that several free, conforming implementations with comprehensive features have appeared, notably Scryer, Tau and Trealla Prolog as the most recent systems, and interesting bindings for other languages such as ichiban/prolog for Go. On the Datalog side, Google has recently published a Datalog system called Mangle, which supports the Datalog subset of Prolog as a proper subset.
As of version 7, SWI-Prolog is consciously no longer conforming to the Prolog ISO standard in critical aspects, so it is not usable in organizations where standards are of great importance for procurement, warranty and interoperability reasons, and where long-term maintenance and guarantees are required:
Hm, I wouldn't say that conformance to ISO makes a Prolog more "interesting". More useful in certain kinds of applications like the ones you point out, sure.
In my experience, there are two Prolog worlds, and that has been true right from the beginning of Prolog with Kowalski coming from the world of pure mathematical formalisms, and Colmerauer coming from the world of pragmatic, practical programmers. The power of Prolog (no pun intended :) comes from the tension between these two worlds.
If Kowalski had his way, Prolog would probably be a perfect jewel encased in golden amber, completely unusable but a wonder to behold. If Colmerauer had his way, Prolog would be a quick and dirty programming language fit to replace javascript or C. But they worked together, hand-in-hand, and made something bigger and better and more wonderful than either could have imagined [1].
This duality is also a curse, of course, because its critics can attack it from two sides: it's not pure enough for the purists, not dirty enough for the programmers. Half of its critics complain that it's not 100% declarative, the other half complain that it is not "general purpose" (see elsewhere in this thread). Well, it's not 100% declarative because it's got side-effects, but then what use is a programming language without side-effects? One simply ends up with twisted abominations like monads [2]. And I still don't understand what "general purpose" means, if it doesn't describe Prolog.
I think Jan Wielemaker [maintainer of SWI-Prolog] is firmly in the Colmerauer camp, striving to create a language that everybody can use, not just musty academics in their (our!) ivory towers. I have to say I don't always agree with his choices, but I, personally, am infinitely grateful for his oeuvre and I have often considered how little I could have done, for example in my PhD research, without his work.
_________________
[1] To hear the French say it of course, it was Colmerauer and Philippe Roussel who created Prolog and Kowalski was probably only visiting Marseilles for unrelated reasons. See the Prolog article on French wikipedia: https://fr.wikipedia.org/wiki/Prolog
[2] On the other hand, I think we can both agree that the dynamic database is evil.
Very disingenuous not to even mention Kowalski on the FR Wikipedia entry. In contrast, the English version says:
"The language was developed and implemented in Marseille, France, in 1972 by Alain Colmerauer with Philippe Roussel, based on Robert Kowalski's procedural interpretation of Horn clauses at University of Edinburgh."
> If Colmerauer had his way, Prolog would be a quick and dirty programming language fit to replace javascript or C.
No.
There are so many such allusions in this post that it is difficult to dismantle them all. So I will stick to this one.
Note that Prolog was born in the very moment when Colmerauer understood how to encode grammars. How much is this quick and dirty? Since Robinson there had been some unease about how to encode grammars, as this (so far) only led to extremely inefficient proof procedures. (Roughly, every partition of a text had to be analyzed, most of which were finally discarded.) Prior to this moment of understanding in the summer of 1972, as is best illustrated by Philippe Roussel's thesis of May 1972, it was not clear which strategy to use.
Quick and dirty features were rather introduced when Prolog was taken from Marseille to Edinburgh. In particular,
1mo, the quite clean error mechanism was replaced by just silent failure: `T =.. L` simply failed, thereby disregarding the observation of Battani and Meloni 1973 that built-in predicates have three possible outcomes: success, failure, and error.
2do, characters were replaced by character codes, something any self-respecting compiler writer would never have done.
But then, all in all, DEC10 Prolog had so many improvements that helped to spread Prolog.
Errors are absolutely essential to ensure the correctness of pure Prolog programs. It took some more decades to recover from these quick and dirty setbacks.
There is only one such quick and dirty issue in Prolog I, and that is the omission of the occurs check, which was present behind a flag in Prolog 0 (the first version, written in Algol-W). In particular, the lame excuse for this omission started with the Prolog I manual of 1975. It was then mindlessly reiterated time and again in DEC10's and many other systems' manuals. At least Colmerauer thereafter developed the notion of rational trees, present since Prolog II.
It is only now that systems are slowly recovering from this, offering an optional occurs check.
Thanks for trying to correct me, if I'm wrong (but thanks anyway). What I write is the impression I've formed after reading various sources on the history of Prolog, over the years. Some of them are even by Kowalski himself, and my impression that he was the "neat" and Colmerauer the "scruffy" comes from his comments on their relation. I certainly could have misunderstood his comments, or be misremembering them. Unfortunately it will take me some time to dig out the specific sources and check my assumptions [1].
One such recent source is this documentary (in French) about Alain Colmerauer by a lady who shares his surname (a relative, I'm guessing):

I've pointed the link to just before the start of an interview with Kowalski where he says (in English) what I relay above: that Colmerauer was more practically minded, coming from a computer science point of view, while he (Kowalski) was more theoretically minded, coming from a mathematical logic point of view, and also that this tension created an "alchemy" that would not have transpired otherwise. I watched this documentary very recently and I guess it coloured my comment above.
More generally, it's easy to find discussion of the "pure" subset of Prolog and of its purported advantages over the rest. I'd have to go back to those textbooks to be sure, but I think I first encountered this idea in The Art of Prolog (Sterling & Shapiro) or The Craft of Prolog (O'Keefe), or probably both. And I have heard Ivan Bratko's coding style in his Prolog Programming for AI described as being too imperative and not the recommended style today [2].
I guess what people mean when they talk about "purity" is the lack of side-effects, and other concessions to the everyday needs of the working programmer. I understand the occurs-check flag to be one such concession, for example [3].
So I don't think that all this is just my imagination. In my experience, there really is a divide between Prolog users who think of Prolog as a kind of mathematical notation for computers and who smart at the thought of its "dirty" side-effects, and those who just want to use it as a powerful, and beautiful, tool. Like myself.
But I should try to be less sentimental in my analysis in the future. Again, thanks for reminding me to check my assumptions and try to remember where my knowledge comes from.
_____________
[1] I spent four years during my PhD at Imperial College literally down the corridor from Kowalski's office, but I never had the guts to go and talk to him and ask him about Prolog's history, which I am always very curious about. I regret that now that my PhD has ended and I'm no longer in the same building, or the same city :/
[2] Which is nonsense. Also Bratko said I'm good with Prolog, so there.
[3] What does the DEC10 Prolog manual say about the occurs check?
In general, we do agree on the observation that Prolog profits from both the theoretical and the practical side. But still, even today I have the impression that the highly theoretically leaning side does not appreciate the fundamental contribution errors make to ensuring correctness properties. At least this was my impression on November 10th in Paris.
Just one personal recollection (from memory) which probably also influences my view on the Kowalski-Colmerauer relation. At a META 90 tutorial/talk about meta interpreters, Sterling attributed the origin of meta interpreters to Warren and the DEC10 manual. Kowalski responded (publicly during the talk) that this was well known to Colmerauer, well before Warren ever got into Prolog.
As to the lady you are referring to: she is his widow, a linguist.
> ... whereas unification without the occur check is linear on the size of the smallest of the terms being unified.
This claim is incorrect; it goes far beyond what is possible. To see its incorrectness, consider the unification problem X = Y, and, just for illustrative purposes, think of both terms as being very big. Now consider a related unification problem Z-Z = X-Y, with Z just a variable. According to the above claim, the cost would now be independent of the sizes of X and Y, which cannot be.
I wonder if all that was more of a consideration in an older time, with weaker hardware. Anyway, where I come from (ILP research, again), "polynomial time" is synonymous with "efficient", and the bigger problem is proofs failing to terminate.
I might actually try some learning experiments with the occurs check flag set to "true" just to see what happens. I mean, I wonder. I'm so used to leaving the flag to default false that I wonder if I haven't formed a completely wrong model about the execution of Prolog programs. I've been coding in Prolog for more than ten years now and it still catches me off-guard at times.
It's funny, but my go-to reference for unification (Foundations of Inductive Logic Programming, 1997) says that the occurs check is crucial for the performance of the algorithm (because without it unification can go on forever, whereas with the occurs check unification will terminate and find an mgu if one exists). That's exactly the opposite of the argument in the PDP-11 and SICStus manuals.
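For concreteness, here is a rough Python sketch of the trade-off being discussed. This is not any particular system's unification algorithm, and all names are my own: with `occurs_check=True`, unifying a variable against a term containing that variable fails; with it off, the variable is simply bound, producing a cyclic binding.

```python
# Illustrative sketch of Robinson-style unification with an optional
# occurs check. Terms: a string starting with an uppercase letter is a
# variable; a tuple (functor, arg1, ...) is a compound; anything else
# is a constant.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    # Follow variable bindings to the current representative term.
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    # Does variable v occur anywhere inside term t?
    t = walk(t, subst)
    if t == v:
        return True
    if isinstance(t, tuple):
        return any(occurs(v, a, subst) for a in t[1:])
    return False

def unify(t1, t2, subst=None, occurs_check=True):
    # Returns a substitution (dict) on success, or None on failure.
    subst = dict(subst or {})
    stack = [(t1, t2)]
    while stack:
        a, b = stack.pop()
        a, b = walk(a, subst), walk(b, subst)
        if a == b:
            continue
        if is_var(a):
            if occurs_check and occurs(a, b, subst):
                return None  # X = f(X) fails only when checked
            subst[a] = b
        elif is_var(b):
            stack.append((b, a))
        elif isinstance(a, tuple) and isinstance(b, tuple) \
                and a[0] == b[0] and len(a) == len(b):
            stack.extend(zip(a[1:], b[1:]))
        else:
            return None
    return subst
```

With the check, `unify('X', ('f', 'X'))` returns None; without it, you get the cyclic binding `{'X': ('f', 'X')}`, which is roughly what a rational-tree system would accept.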
I confess it's been a while since I've looked at the matter. Thanks for reminding me to refresh on it. Just in case it comes up in some internet conversation and I come across as clueless :P
>> Just one personal recollection (from memory) which probably also influences my view on the Kowalski-Colmerauer relation. At a META 90 tutorial/talk about meta interpreters, Sterling attributed the origin of meta interpreters to Warren and the DEC10 manual. Kowalski responded (publicly during the talk) that this was well known to Colmerauer, well before Warren ever got into Prolog.
Wow, that's a really interesting historical tidbit. My specific research in ILP is on an approach called "meta-interpretive learning". Doesn't quite roll off the tongue, but it's basically what it says on the tin: a Prolog meta-interpreter used for induction (i.e. learning). Without the idea of meta-interpreters, that wouldn't have been possible. And meta-interpreters themselves, btw, would not have been possible with strict adherence to logic programming formalisms. What I mean is, Prolog programs can take Prolog programs as arguments, so Prolog programs can't be first-order. And they're not! But that opens up entire avenues of possibility, like representing second- and higher-order logics.
>> As to the lady you are referring to she is his widow, a linguist.
> the occurs check is crucial for the performance of the algorithm (because without it unification can go on forever, whereas with the occurs check unification will terminate and find an mgu if one exists).
Many current systems perform rational tree unification: SICStus, SWI by default, Scryer by default, etc. The claim that unification can go on forever holds only for a minority of systems today.
> Hm, I wouldn’t say that conformance to ISO makes a Prolog more “interesting”. More useful in certain kinds of applications like the ones you point out, sure.
“Interesting” is inherently subjective, and relative specifically to the interests of some specific entity or group. When the Austrian Federal Computing Centre says something is “interesting”, it is shaped by the needs and constraints of the Austrian Federal Computing Centre.
What is interesting to you may be different, because you probably have different interests than the Austrian Federal Computing Centre.
GNU Prolog is an excellent choice for everybody. I once considered switching back and forth to SWI if I ever needed some convenient module GNU lacks. The need never really arose!
In the late 1980's, when I was at college in Sydney, we had a visit from a chap (I think from somewhere in the north of Europe) who'd previously won a tender for, IIRC, a booking / tracking system for a relatively small airline, all to be written in Turbo Prolog.
I recall two key things from his presentation of his final product.
First, he claimed he'd quoted half the cost/time of a COBOL-based competing tender, and a quarter of a C-based one -- and that he'd met his deadlines.
Second, the UI was just beautiful - for both aesthetics and performance.
I didn't have much experience in enterprisey software at the time, but what little I had seen was archetypal terminal-style, monochrome, sluggish, far from user-friendly etc. I have some (no doubt unreliable) memories of this thing looking like a 'modern' (say Amiga) GUI app.
Another item in that large category of 'I wish they'd drop these things into the public domain once they retired it from production' as it would probably still make a great demo (though most free Prolog implementations these days wouldn't support (m)any of the pretty Turbo extensions, I'm sure).
My guess is that it was Prolog Development Center (PDC) [1], the company behind Turbo Prolog and Visual Prolog [2]. They built a nice business providing scheduling and planning software to airlines.
They are still around if you want to get in touch.
It could be the flight booking system described on SICStus Prolog's old homepage:
SICStus Prolog is a workhorse in the transport and logistics industries, running systems that handle a third of all airline tickets, and helping railways to operate their trains better.
“Most people probably have used SICStus Prolog without knowing it,” says Mats Carlsson, its lead developer. “One of our customers runs a flight booking system on SICStus which handles nearly a third of all airline tickets in the world.”
Another guess, matching the sibling's, could be the Danish company PDC: they are the developers of Turbo Prolog, and continue to develop it under its new name Visual Prolog.
The Power of Prolog [0] is a fantastic blog/video series covering everything from basic syntax, theoretical basis, modern features and idiomatic constructs.
I highly recommend it if you want to get the gist of Prolog and its modern features.
If you want a tour of Prolog, you can watch the video with the same name [1].
Prolog feels like one of those things I definitely couldn't write a whole app in, but I often wish it was just there in other languages the same way regex is.
I agree that embedding Prolog is a compelling use case. I ported Trealla Prolog to WASM to help the cause. It's super cool being able to store some business rules as a chunk of Prolog text and run them on both the backend and the frontend as-is.
Want to explore your code, maybe test out some quick changes? Prolog comes with a nice REPL. Want to modify the knowledge base? Add some clauses with assertz/1 and dump out the full program code with listing/0; no need to wrangle the text yourself. Want to make a previously static relationship more dynamic? Just throw in a new rule: as far as Prolog is concerned, rules and facts are equal citizens. It's like a database that lets you intermix static and dynamic row definitions anywhere you want. Testing it is easy: Prolog will happily give you every possible result for a general query. Want to generate error messages from chunks of code, or run your queries backwards? Just throw in a little meta-interpreter [1].
I think Prolog is the perfect companion language to static and difficult-to-extend languages like Go. I think that if we want Prolog to become more popular, we should focus on a great embedding experience for developers. That's my current goal :)
I went through a struggle trying to implement an MMI for a telecom product using Prolog; it was just not performant enough.
Around the same time, Joe Armstrong of Ericsson was experimenting with Prolog to build a switching system. Erlang came out of that effort, essentially stripping out the backtracking - wish I had thought of that.
Elixir is built on Erlang and is definitely a production language.
None of this should be its own language; I'm not sure why people are still working in that direction when it's been clear for decades that it's barely possible to use as a standalone language. It's as if somebody implemented TensorFlow in its own separate weird language and insisted that's the best and only way to use it.
I think people are far too constrained by adhering to what Prolog is standardized as, making sure there's literally no advancement whatsoever. We just need a more practical formal logic library or type or something for your average language without any stupid string definitions or passing literal prolog in for execution. I mean is that too much to ask?
I so wish there was a complete prolog implementation exactly like this in pretty much any language I use. Unfortunately they always end up being woefully incomplete.
> Prolog feels like one of those things I definitely couldn’t write a whole app in, but I often wish it was just there in other languages the same way regex is.
Embedding prolog (or limited versions, like Datalog, or versions of Prolog or Datalog extended with probabilistic, temporal, or other features), or other logic sublanguages like MiniKanren, in other languages isn’t particularly uncommon.
Totally. I wrote a network management system with a prolog rules engine years ago.
It was amazing when my colleague and I figured out how to integrate that system with our Oracle database in real-time. Having a prolog engine embedded that way made it incredibly powerful to manage and alert on pretty complex relationships.
I agree, that's why I made this: https://github.com/smallbard/SharpLogic to embed it in C# (in order to use it as easily as LINQ).
It's a work in progress (a very fun one!)
I'd like a spec language which takes constraints and compiles them into code in the target language. That way, we can design features at a higher level and don't have to get lost in forests of ifs, tree searches, caching mechanisms and multi-layer state machines.
One clever hack I've seen is Yield Prolog (https://yieldprolog.sourceforge.net/) which basically translates the querying/backtracking execution model into mainstream language syntax - such as JS, Python or C# - using the `yield` keyword and loops. Supports things like variable unification, too.
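To make the trick concrete, here is a toy Python sketch of the Yield Prolog idea; the names here are my own, not Yield Prolog's actual API. Each predicate is a generator that yields once per solution, nested `for` loops chain goals, and resuming a generator past its `yield` undoes its binding, which gives backtracking for free.

```python
# Toy sketch of generator-based backtracking in the Yield Prolog style.

class Var:
    def __init__(self):
        self.bound = False
        self.value = None

def unify(var, value):
    # Bind var to value, yield one solution, then undo the binding
    # when the caller backtracks (i.e. resumes past the yield).
    if var.bound:
        if var.value == value:
            yield
    else:
        var.bound, var.value = True, value
        yield
        var.bound, var.value = False, None

def parent(child, par):
    # parent/2 as a generator over a small fact table.
    for c, p in [('ann', 'bob'), ('bob', 'carol'), ('bob', 'dave')]:
        for _ in unify(child, c):
            for _ in unify(par, p):
                yield

def grandparent(child, gp):
    # grandparent(C, G) :- parent(C, M), parent(M, G).
    mid = Var()
    for _ in parent(child, mid):
        for _ in parent(mid, gp):
            yield

X, Y = Var(), Var()
for _ in unify(X, 'ann'):
    for _ in grandparent(X, Y):
        # prints: ann has grandparent carol / ann has grandparent dave
        print(X.value, 'has grandparent', Y.value)
```

Real Yield Prolog also handles full term unification (the sketch above only binds atoms), but the driving mechanism is the same nesting of generators.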
Having a compiler that supports multiple full languages with required consistent ABI at those boundaries is something we really could use. Then you could have a prolog layer for things where it makes sense mixed with an Algol or ML or whatever based language for the rest of the logic. Similar to what .NET has done-ish, but wider ranging is what I'd love to see.
Good comments here and the original Reddit thread.
I just wrote Prolog material for a book I am writing: Prolog embedded in Python. I also have an embedded MiniZinc example.
I have infrequently used Prolog over a 40 year period. Most interesting work project was rewriting a Common Lisp IR&D planning system in ExperProlog that supported nice graphics. The personal Prolog projects that most influenced me were using the semantic web library for SWI-Prolog in the early days of the semantic web: the best environment for learning about the SW that I found at the time.
The Quantum Prolog site [1] advertises ISO Prolog for planning, optimization, diagnostics, and complex configuration but also showcases Inductive Logic Programming as a variant of Machine Learning. Agree Prolog should be seen as special-purpose language, but rather for a much larger domain than parsers. Datalog, using a Prolog syntax subset, is a decidable logic fragment for use in theorem provers and is also used as a query language. MiniKanren may be useful as embedded logic solver (I don't know really) and MiniZinc might have its uses, too, but those really don't compare to the portability and size of the Prolog ecosystem IMO.
I've used Prolog since 2008, I think, and I've heard it said that it's not a "general purpose programming language", but I don't understand what that means. Could you explain? What is a "general purpose programming language" and how does Prolog scream it's not one?
I'm asking curiously and I guess I have a weird perspective, having used Prolog for a long time, so I would be grateful for an explanation.
(To clarify, most of my career as a programmer I worked mainly with C# and SQL, so I haven't somehow been hiding away in a secret Prolog dimension. I wish :)
Python is a general purpose language. You can write anything in Python, from quick and dirty replacements for bash scripts, to Windows desktop applications, to clustered supercomputer code. It also has a library ecosystem to match.
Naturally, the “curse of Turing” means that virtually any programming language can, with enough effort, do anything. I wouldn’t classify Prolog as an appropriate language for general purpose use vs, say, Python.
In theory, I could write a replacement for the Linux kernel in machine code or COBOL or even Prolog. This doesn’t mean that it would be wise to do so. (I’ve written code in professional settings in all of these ways.)
Why wouldn't you? "Curse of Turing" usually implies "if you implement a VM capable of running a Python runtime then you can do what Python can do", but Prolog is nothing like that:
> "You can write anything in python, from quick and dirty replacements for bash scripts"
Transparent Inter-Process Communications (TIPC) libraries: "a framework for cooperation between federations of trusted peers that are operating as a unit. It was developed by Ericsson AB" - https://www.swi-prolog.org/pldoc/doc_for?object=section(%27p...
SWI won't have anything like Python's ecosystem due to many fewer users, but it has a non-toy ecosystem of networking code, image processing, database connectors, more performant data structures, Protocol Buffers, Unicode, YAML, SGML, XML, HTTP, zlib, gzip, unit testing, debugging, profiling, FFI, etc.
I mean, you might not want to, but it's not like you have to start from half a dozen Turing Machine instructions and an infinite tape and work your way up from there.
Yes! See https://news.ycombinator.com/item?id=34195543 elsewhere in this thread with a discussion about SWI-Prolog's http libraries used to run their own website, replacing a more traditional LAMP stack.
But, just to clarify, is that what is meant by "general purpose", that a programmer can write anything they want and there are libraries to boot?
The main reason I am very confused about "general purpose" is because, in my experience, Prolog is very much a language where you can write any kind of program you want. And there are libraries aplenty in SWI-Prolog, although of course Prolog has a much smaller community than Python, so one should not expect quite the same level of support, I guess.
I suppose, why take away those use cases only to later weld them back together with Python, maybe having to do it with a bunch of stringly-typed FFI calls. Now you have three or four languages/programs/DSLs to deal with, each with impedance mismatches, and they can only interact through the central Python program.
Or you can collapse your stack down to one technology that integrates it all in such a natural way that you can list recursive parsing and linear constraint optimization in the same breath and still say it's not a general purpose language.
Most programming languages are Turing complete, and if so they're also general purpose by definition. You can use them to build anything.
That said, of course our definitions might differ: if a language doesn't have plenty of layers of abstraction that speed up your development, you might not consider it general purpose compared to another.
After all, we are running JavaScript on servers now and you can find an npm module for nearly everything. Someone 20 years ago would probably have said that "JavaScript is not a general purpose language".
Don't get me wrong, Prolog may never be the next JavaScript, but it's just fun how people decide whether something is "general purpose" or not (no hate, just a comment seeing things from a different angle).
Turing completeness is an interesting theoretical concept, but it is wholly unsuitable and largely irrelevant when discussing the usefulness of programming languages.
What is "usefulness" then? It's clear that different programming languages are more suitable for different use cases, for example one wouldn't normally script a browser in Lua or use Javascript to write a driver for a hardware device.
That said, if it's about use cases, Prolog is an excellent language for web development and capable of replacing most of the usual mixed OOP and relational stack. For example, the SWI-Prolog website runs on SWI-Prolog's own http libraries.
This page presents a brief history of hosting and developing http://www.swi-prolog.org. The website runs SWI-Prolog and is basically `PlDoc, the SWI-Prolog source documentation system on steroids'. This site replaced a classic Apache server built from static documentation pages, a Twiki-based wiki and Bugzilla. The site was re-implemented as a SWI-Prolog-based HTTP server that extends PlDoc for the following reasons:
1. PlDoc unites the core reference manual with the package documentation and documentation extracted from all loaded sources in a coherent interface. In addition, it provides search, dynamically generated hyperlinks between the various pieces of documentation, and display of the source code. This is far better than the fragmented and not properly interlinked static documentation of the old site.
2. SWI-Prolog comes with extensive libraries for providing web services. Running the website on the system is just the ideal opportunity to test these services and prove to the Prolog community that they are a vital piece of the infrastructure and that we trust them. The server:
welcomes about 4,000 visitors each day, viewing about 30,000 pages (120,000 hits) (2013)
serves about 350 Gb of data per month
serves downloads from static files
serves manual pages from on-the-fly post-processed HTML documents
serves dynamically generated HTML pages from documented source files, coloured source files, etc.
analyses, indexes and presents packs. It also presents the meta-data for pack_install/1
runs a REST API to handle user annotations.
I was just responding regarding using Turing completeness as a gauge of usefulness. Although I don’t know it well at all, I think Prolog is an awesome language and would like to know it more.
"Usefulness" is that intangible quality that some tools have which make people overlook their unfriendly user interface, lacking documentation, brittle and crashy behaviour, inelegant design and push through that. "Uselessness" is that intangible quality or dealbreaker which means people who desire to use a pleasant or elegant or well-documented or stable tool, still cannot or do not, for any technical, practical or social reasons.
Clearly useful/useless varies with person and scenario: mandoline: sometimes useful to a cook; knife: sometimes useful to a lot of people. To me, your other comment where you link to the Ackermann-Péter function on Rosetta Code in successor arithmetic is the opposite of usefulness. Both versions solve ack(3,1,X) near enough instantly. The first, with tabling, solves ack(4,1,X) in a couple of seconds, and without tabling it overflows the stack even when I raise the stack limit to 8GB. Your version's comments say how clean and elegant it is with Peano's axioms, but it also stack-overflows on ack(s(s(s(s(0)))), s(0), X), and that's with or without tabling. As far as I'm concerned, neither version is more readable than the other; they're both three lines of dense, unhelpfully-named variable shuffling. You've turned something you think is inelegant but which works into something you think is more elegant, but which works less well. Pleasing, maybe, but to me it's the opposite of useful.
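As an aside, the effect tabling has here can be mimicked outside Prolog by memoising the two-argument Ackermann-Péter function in Python. This is my own illustrative sketch, not the Rosetta Code program: caching already-computed calls is what turns the exponential blowup of repeated subcalls into something tractable for small inputs, which is the same job tabling does for the Prolog version (recursion depth, not repeated work, remains the limit for larger arguments like ack(4,1)).

```python
# Memoised Ackermann-Péter function: a rough analogue of Prolog tabling.
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # the recursion is deep even when cached

@lru_cache(maxsize=None)
def ack(m, n):
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

print(ack(3, 1))  # 13, matching ack(3,1,X) above
print(ack(3, 7))  # 1021
```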
It's like the people talking of RDF triples; Google them and read:
> "every part of an RDF triple is individually addressable via unique URIs — for example, the statement "Bob knows John" might be represented in RDF as:"
Doesn't that just make you want to die? What could be more 1990s architecture astronauty, academic and impractical than turning "Bob knows John" into that mess? It's from a world of CORBA, SOAP, Jabber, Grid computing, SGML, and write papers about ontologies, not a world where people do useful things.
Or take the typical Prolog example of grandparent/parent/child family tree which might be academically pretty as an example of recursive search, but is basically useless in describing real world families with divorces, remarriages, foster parents, half-siblings, polygamy, unacknowledged offspring, children raised believing their grandmother was their mother and their mother was their sister, and all the rest of it. By the time code is extended like that it's more "useful" and less academically elegant, and by the time you've handled input and output and database formats and error messages and UX you may as well not be writing it in Prolog for all the difference it makes.
Or take the typical Prolog example of linguistic programming making sentences like "the cat sat on the mat" with a grammar which, after you squint a bit, you realise is equivalent to this brute force Python:
for creature in ['cat', 'dog']:
    for verb in ['sat', 'lay']:
        for article in ['a', 'the']:
            for object in ['mat', 'chair']:
                print(creature, verb, 'on', article, object)
and that brute forcing all combinations is not especially performant, elegant, or useful, just because the grammar is pretty, and you aren't fooling anyone by writing verb as 'v' and noun as 'nn' and determiner as 'det' to make it look mathematical, and trees don't describe real use of language very well e.g. "Sound, mate - later!", or "okay, you do you".
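The nested loops are just a Cartesian product; here is the same brute-force enumeration written flatter with Python's itertools (my rewrite, same sixteen sentences):

```python
# Same brute force as the nested loops, flattened via a Cartesian product.
from itertools import product

for creature, verb, article, obj in product(
        ['cat', 'dog'], ['sat', 'lay'], ['a', 'the'], ['mat', 'chair']):
    print(creature, verb, 'on', article, obj)
```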
"Usefulness" then is going in the other direction, it's the things which are inelegant, ugly, clunky, not well founded on beautiful theory, but still compelling enough to be widely used; C and C++ shipping Linux and Windows and macOS and Chrome and FireFox and 3D game engines and JavaScript V8 engine and Java JVM and .Net CLR and SWI Prolog engine and SQLite and SQL Server and PostgreSQL and Oracle and Rust shipping Scryer Prolog, Python or Swift cobbled over TensorFlow shipping workable ML models, Objective-C building most iOS apps, Go shipping Kubernetes, it's Lua being used as an embedded extension language, Perl writing shell scripts, AWK being small and quite low dependency and self-contained, JavaScript sustained by browser support, Excel or SMTP sustained by network effects.
It's not about arguing whether people 'could' use a thing, it's some intangible qualities that mean people actually do use a thing without trying to prove anything. For example, in T-SQL you cannot write "select a,b,c, from thing" because it complains about the trailing comma after the select list. That's annoying because I rarely write the correct thing first time and often edit the select list while I'm working, hitting that annoyance over and over. In SWI-Prolog I can't write "?- test(X), % foo(Y)." because it complains about the trailing comma. That's annoying because I rarely write the correct thing first time, etc. If your instinct is to argue that the comma is meaningful, you're in academic feel-superior useless territory. In JavaScript, arrays like "x = [1,2,3,];" used to error because of the trailing comma; somehow it was fixed so that's fine and it's a list of three items. If you're on that side of things, that's useful practical territory.
SWI Prolog's error messages are often useless for understanding what's gone wrong; "argument insufficiently instantiated in {term-expanded deep CLPFD internals}" or "{expanded foldl or maplist internals}" rarely helps find the problem. If your feeling on reading that is either "works for me, wontfix, get good" or "FOSS maintainers don't owe you anything check your entitlement", that can be completely correct and also in useless territory. Rust developers, realising they had an uphill battle to get anyone to care at all, spent a lot of effort on quality of life things like error messages and Cargo behaviour instead of turning people away for being entitled. Canonical did their "thousand papercuts" project for Ubuntu realising it needed more than "if you don't like it, don't use it /smug" to improve Linux on the desktop to the point where people wanted to use it, and one way or another it has become one of the most widely used distros.
Regarding the Peano version of the Ackermann-Péter function, I checked again to
make sure and I never used the word "elegant" anywhere, either on HN or on
Rosetta Code. I would never be so presumptuous, you know? On Rosetta code I
pointed out that my version is "pure" because that's what Prolog code without
side-effects is usually called. I put it in "quotes" because I personally don't
care about purity.
What I care about is simplicity of structure. That's because the simpler the
structure of a program, the easier the program can be learned. You see, my
PhD studies and my research interest are about writing Prolog programs that
learn Prolog programs from Prolog programs and to do that it really helps if all
the Prolog programs are veeery simple, for combinatorial explosion ever lurks.
For example, I'm so delighted that I figured out how to write that "pure"
version of the Ackermann-Péter function not because I'm so full of myself I
think it's "elegant" but because it is simple, and thanks to its simplicity I
could then get my ILP system, Louise, to learn it from a single example:
The is/2 version would be much harder to learn, because of the more complex
structure. Although I think Louise can do that too. Btw, I got Louise to learn
the Function because I was challenged to do it by one of the examiners in my
viva, who figured the Function's furious combinatorial complexity makes it
unlearnable. That's right, but Louise can get around that because it can learn
recursive programs with arbitrary structure from a single example, so it doesn't
have to be trained on any example that triggers the Function's combinatorial
fury. In any case, Ackermann is a toy function only really useful to make a
point about recursion, and that's exactly how I use it also. Who cares if it
blows up with n = 2, m = 4, or with n = 5, m = 6? It blows up quickly anyway.
That's the whole point.
So, your comment above is very stream-of-consciousness and I wonder what
motivated you to write such a mad rant of the internets. I guess you must be
really passionate about the subject you discuss. If so, I understand. You are
not alone :) No, really, I generally agree with you. I asked above what is
"usefulness" because for me Prolog is useful and I don't understand why anyone
would say it isn't. Reading your other comment in the thread, I got the
impression you would agree with me on that.
But it's not about agreement. Aye, the night is long and full of errors and we
must somehow deal with the dirty, dirty real world. Personally, I'm happy with
that. But I do like my tools to also be nice to look at and I find that when I
can make them so, they also tend to work better. For example, cleaner, better-commented code is easier to read and maintain. I mean, duh. In any case, in my mind, a well-made tool is a thing of beauty any way you look at it.
Re: "quality of life", well yea, that. If you were to look at my Louise, I think
you'd find that I've gone out of my way to make it into a system that people can
use. I'm totally inspired to do that by SWI-Prolog, btw. So for example, Louise
has logging facilities up the wazoo and so every time someone tries to use it and
it breaks, they send me a nice, friendly dump of their logs, and then I can
trudge through them and help my user. Because I want to help my user. I wish
people sent me more bug reports, but I guess the majority find that something
breaks and they don't bother to tell me. Oh my god, the cruel torment.
As another example, I spent considerable effort to make sure that Louise's
"metarules" (second-order background knowledge) are easy to read and use. For
that purpose there are three metarule formats that Louise's pretty-printer can
produce. One is the internal representation of the metarule that's useful for
debugging, one is the user-friendly format where metarules are declared at the
user-level and a final one is the formal notation, with quantifiers, used in the
literature, so users can directly relate the metarules they use to a paper they
read. Here's what that looks like ("chain" is the friendly identifier of that
particular metarule):
By contrast, if you look at the original Meta-Interpretive Learning system,
Metagol (https://github.com/metagol/metagol), this is what "chain" looks like:
If I understand your comments about "quality of life" work correctly, then
that's pretty much what I always try to do and what most people in academia
never do. Guess what, I'm out of the game now. But, well, at least I tried :)
I think many people understand general purpose to mean it's a good language choice for many different and common uses. It has little to do with how Turing complete the language is. Just how well it suits a variety of problem domains.
> For example C is considered general purpose despite not being Turing complete.
Actually, C fails the general purpose language criterion too, because it's simply not a good choice for many different and common uses. It's a good choice for very specific uses at this point.
C treats computer memory as an array of bytes indexed by a pointer type that can take only finitely many values. If the pointer type can be cast losslessly to `long int` - which is usually understood to be 64 bit - then this byte array has precisely 2^64 entries. That implies that C programs have an absolute limit on the amount of memory they can use (2^64 bytes). That's obviously true, because each call to `malloc` fills up another entry of this finite byte array. C therefore can't simulate a Turing Machine, because a Turing Machine can use an unbounded amount of memory.
If you use a file system, or another source of external memory or storage, then the above argument no longer applies.
A long time ago computers with limited RAM address space - and problems which didn't fit - used various tricks to deal with that. Overlays, paged memory, virtual memory - all that could effectively address more memory than address space allows.
So I'd argue C-as-specification (not as existing implementation, I don't know) is _theoretically_ Turing complete. Yes I agree it's a troll.
C is only technically not Turing complete because it runs on a real machine with finite memory. By that definition, no programming language is Turing complete.
Btw, I wanted to note that Prolog is indeed very good at any problem that
requires manipulating recursive data structures, like parsing. But that's one
thing that you can do more easily in Prolog than you can in other languages.
It's one of the things it is good for, if you want, not the one thing it is good
for.
For me, what Prolog does best is get all the ceremony of other programming
languages out of my way and let me concentrate on the problem at hand. By
"ceremony" I mean special syntactic constructs. For example, you don't even
declare variables, as such, in Prolog, there's no assignment expressions,
classes and enums and switch statements, there are no for and while loops etc
etc. Prolog instead gives you a very small set of abstract building blocks that
you can use to write any program. It's like programming with Lego bricks, except
you have one kind of brick, a universal Lego brick that you can build anything
you want with.
Which is why it's so good at solving hard problems, because once you're used to
its abstractions and how to combine them, any problem becomes an exercise in
finding the right way to fit the "bricks" together [1] [2].
That's also why Prolog is hard to learn: imperative languages have ceremonial
constructs (if-then-else, while-do, yield, etc) for a reason. The reason being,
because that makes it easier to write programs. But not easier to _solve
problems_. Not when the problems get hard anyway [3].
So perhaps this is one reason why people say it's not "general purpose"? Because
it's hard to learn, so it's hard to use when you're just trying to write, dunno,
simple list processing programs, but makes things easier when you're trying to
do something inherently difficult, like write a parser? If I think about that, I
could see how Prolog may come across as a language only good for certain kinds
of problem (which is my attempt to interpret "general purpose", though I'd still
be grateful if someone gave me a clear definition that I can grok).
____________
[1] As an example, here's two ways to write the Ackermann–Péter function in
Prolog:
One uses the is/2 predicate, which is an instance of what I mean by "ceremony".
The second uses Peano notation for natural numbers and is basically a direct
translation of the Function's definition on wikipedia (full disclosure: the
second version is mine).
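For readers who haven't seen the two versions, here is roughly what the two styles look like — my own reconstruction of the styles described, not the exact Rosetta Code entries:

```prolog
% Version 1: arithmetic via is/2 (the "ceremony").
ack(0, N, R) :- R is N + 1.
ack(M, 0, R) :- M > 0, M1 is M - 1, ack(M1, 1, R).
ack(M, N, R) :- M > 0, N > 0,
                N1 is N - 1, ack(M, N1, R1),
                M1 is M - 1, ack(M1, R1, R).

% Version 2: Peano naturals, a near-direct transcription of the
% mathematical definition, with s(N) standing for N + 1.
peano_ack(0, N, s(N)).
peano_ack(s(M), 0, R) :- peano_ack(M, s(0), R).
peano_ack(s(M), s(N), R) :- peano_ack(s(M), N, R1), peano_ack(M, R1, R).
```

Note how the second version has no arithmetic at all: each clause is nothing but unification and recursive calls, which is the structural simplicity at issue.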
[2] That's also the reason why it's infinitely easier to machine-learn Prolog
programs, as in Inductive Logic Programming, than to learn, say, Python
programs. If you want to loop in Prolog, all you need to do is place a literal
in the body of a clause that has the same symbol as the literal at the head of
the clause: p:- q, p is recursive. So if you want loops, just move your "bricks"
around until they're in the right place, and job done. To machine-learn Python
you have to find a way to compose every single special-purpose expression, and
all the other kinds of loops, and that gets a lot more complicated, more so as a
program grows in size.
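To make the "bricks" point concrete, here is the smallest possible illustration (a toy example of my own):

```prolog
% A clause is recursive simply because the head's predicate symbol
% reappears in the body; no dedicated loop construct is needed.
count_down(0).
count_down(N) :- N > 0, M is N - 1, count_down(M).

% ?- count_down(3).
% true.
```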
It should be noted that you get the same simplicity with languages of the lambda
calculus, hence Inductive Functional Programming is also a thing.
[3] All the high-level abstraction also makes it much harder to read. See the
Peano notation Ackermann-Péter Function above.
>All the high-level abstraction also makes it much harder to read
All that abstraction can also make code easier to read by hiding details irrelevant to a function. Not that you can't do similar abstractions in Prolog.
When I wrote my bf-in-prolog implementation [1] to play with the language, I found myself writing functions to create a purposed wad of data and then manipulate or extract from it, so the rest of the program could neatly ignore how that wad of data actually worked internally, in much the same way as I would in any language.
>why Prolog is hard to learn
Prolog is a pretty standard functional programming language, just hidden in a backtracking query runner.
All of the introductions and tutorials generally start from the query runner's point of view, of facts and rules and backtracking with examples of `parent(This,That)` and showing it can identify grandparents or cousins or whatever, so it's little wonder that most programmers aren't going to bother sifting through to see it's just a little pattern-matching haskell/c++-template-language kind of thing.
>you don't even declare variables, as such, in Prolog, there's no assignment expressions
I definitely viewed unification as a form of variable assignment ( understanding it would backtrack if unable to assign ). There is a `=` in there to unify a variable with another variable or a given chunk of data. I used it to initialize data structures and to terminate recursive calls by assigning to the 'out' variable. I understand it's "unifying" rather than "assigning", but the effect is the same if you know one of the variables to be unbound. Otherwise it's just an assertion of equality (unifiability?) between the two values.
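As a small illustration of that "assignment or assertion" view (my own example):

```prolog
% Whether =/2 looks like assignment depends on which side is unbound.
demo(A, B) :-
    P = point(1, 2),         % "initialise": P was unbound, now bound.
    point(A, B) = P.         % "extract": A and B get 1 and 2.

% ?- demo(A, B).
% A = 1, B = 2.
% ?- point(1, 2) = point(1, 3).
% fails: with both sides bound, =/2 is a pure equality test.
```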
> Prolog is a pretty standard functional programming language, just hidden in a backtracking query runner.
Disagree. The fundamental difference (to me) is that in a functional language you write functions that have inputs and outputs. You can write code like this in Prolog, but Prolog encourages the idea that there's no such thing as an input or an output; there are only relations. If you write a function such as
c = a + b
in a functional language you cannot just say "here are c and a; tell me what b is." That idea makes no sense in a functional language, but Prolog will simply tell you what b is. You can even say "here is c; give me an assortment of a's and b's that make the relation true" and Prolog will start listing them off (albeit probably not very efficiently, and one does not typically use this capability for simple arithmetic).
This is a very different way of thinking about functions because they are inherently "bidirectional" and it's one of the things that makes Prolog feel more like a magical query language than a functional language.
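The `c = a + b` case can actually be run in every direction using the finite-domain constraint library — a sketch assuming SWI-Prolog's library(clpfd) (it is library(clpz) in Scryer Prolog):

```prolog
:- use_module(library(clpfd)).   % library(clpz) in Scryer Prolog

% #=/2 states the relation C = A + B without fixing a direction.
sum(A, B, C) :- C #= A + B.

% ?- sum(1, 2, C).     % forwards:  C = 3.
% ?- sum(1, B, 3).     % backwards: B = 2.
% ?- sum(A, B, 3), [A, B] ins 0..3, label([A, B]).
%                      % enumerates every pair with A + B = 3.
```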
This works for simple little functions, but if someone was going to use Prolog for anything complex, it would quickly become impossible. There aren't going to be functions that take a rendered page and an alist of values and return the template. Or that reverse hashing. Or really anything very complex.
For using Prolog in its original purpose as a database of queryable facts and rules, this is useful, but as a general-purpose programming language, I think it's more of a parlor trick.
I don't think it's really about "simple" functions per se.
Bidirectionality is useful where function inversion is meaningful (i.e. not for hashing) and where the domain of the function is explicitly finite or otherwise bounded by constraints. That's exactly the situation when querying databases. Such queries can get quite complicated.
Agreed that it's not that useful for GP programming, but it's very useful for databases and constraint management systems.
>> Prolog is a pretty standard functional programming language, just hidden in a
backtracking query runner.
I'm perplexed by your comment. Prolog is not a functional language. It's a logic
programming language.
The terminology about "queries" and "facts" and "rules" is extremely
unfortunate. I think it's inherited from databases, because of the common
implementation trick of storing Prolog programs in a relational database. But
explaining Prolog in these terms, as is indeed very common, only begets endless
confusion. For instance, there is no "query runner" in Prolog. Because there are
no "queries". What people call a Prolog "query" is really a Horn goal proved
by SLD-refutation with respect to a set of definite clauses, i.e. a Prolog
program.
The point of all that is automated proofs. It just so happens that proving
first-order logic theories and executing programs is the same, hence we can
write a logic program as a theory and run a proof to execute it.
>> I definitely viewed unification as a form of variable assignment (
understanding it would backtrack if unable to assign ).
Yes, that's a common mistake to make, and not your fault. These things are never
explained in Prolog courses. You can think of unification as a form of equality,
as you say, but in truth, unification is a clever optimisation that makes
execution efficient. It makes it possible to run an SLD-refutation proof without
having to ground the entire program into a set of propositions, first.
But to understand what the hell that means you really need to understand
resolution theorem-proving, and a lot of theory about first-order logic, that is
simply not taught to programmers. I was very confused about all that myself,
right up until the first year of my PhD when I finally had the leisure (and the
need) to sit down and really try to get to grips with the material.
But I don't think one needs a PhD to understand all that. What is really needed
is better teaching that doesn't muddle the water with useless terminology about
"queries" and "facts" and "rules" and so on.
I don't know Haskell but if I understand correctly it uses the Hindley-Milner
type system, which itself uses unification as a type-checker? If so, maybe
that's the similarity you see?
As far as I understand a functional programming language is one where functions
are first-class citizens and hopefully the lambda calculus is the top-level
abstraction. That's not the case with Prolog, which doesn't even have functions
but (like dreamcompiler says) relations which are generalisations of functions,
that don't "evaluate" to anything but "relate" objects.
I think you're trying to understand Prolog using the abstractions you already
know which is, I guess, the way of science. But you have to be careful about
applying the wrong abstractions. For example, a language can't be "functional"
if it doesn't even have functions. In Prolog everything is a definite clause,
and clauses are first-order logic formulae, not functions. The wikipedia article
on definite clauses is I think clear and to the point:
Actually that's the article on Horn clauses, of which definite clauses are a
subset, but that's useful to understand, also. For me, a lot of confusion was
cleared up when I found a good explanation of what is a definite clause.
Why is repeat/0 gross? It implements a failure-driven loop. If you want to force
your code to backtrack over all choice points, that's what you need to do. You
can't get the same result with recursion. For instance if you have a
nondeterministic predicate (definition) and want to loop over all its results,
if you try to do that with recursion you'll just generate the same result at
every step of the recursion. Instead you want to iterate over all the choice
points.
To demonstrate its behaviour you can try that on the SWI-Prolog command line:
?- repeat, between(1,3,I).
I = 1 ;
I = 2 ;
I = 3 ;
I = 1 ;
I = 2 ;
I = 3 ;
I = 1 .
The ";" are where I ask for more results, the "." is where I said "stop". If you
tried to recurse around between(1,3,I), you'd get I bound to "1" at every step
through the recursion, and still leave two choice points behind for every "1".
It's been a while since I used repeat/0, but in general it's used to write
generators. It usually makes more sense to use findall/3 and friends, which
simply iterate over choice points without adding new ones. For example:
?- findall(I, between(1,3,I), Is).
Is = [1, 2, 3].
People complain about dirty imperative constructs in Prolog like that all the
time, but they are very useful to a pragmatic programmer, and every time anyone
tries to "fix" Prolog by removing the impurities, they break it.
This is something I appreciate too, that's why I gradually moved toward lisp, sml, haskell and prolog. I find other languages just give me the wrong tools and setups and restrict my solution space and pace a lot.
Has anyone used Mercury? The language seems quite interesting and has C and C++ FFI. I see bower (https://github.com/wangp/bower/) a notmuch client is written in it.
It also has explicit determinism declarations, that avoid what is IMO the largest Prolog problem, that your program unexpectedly gets some exponential slowdown.
Added to that, it has an explicit and strict type system, and much saner numeric arithmetic (mostly because of the strict types). It also has explicit side-effect declarations, so you know if your code is pure or not.
I have never used it on anything non-trivial, so I don't know how it copes in the real world. But the language looks very nice.
I just installed and used bower yesterday, and also learned of Mercury yesterday. Bower was quite nice, and compiling it in a language I'd never known existed was trivial.
I was rather disappointed when I learned Prolog 20 years ago, because of the unfulfilled expectations.
You could program in a totally declarative way, with logic formulas, and the system would take care of evaluating the formulas! As a person who likes the math/theory side of CS, that sounded absolutely amazing. Except that it wasn't really like that. Because IIRC (20 years have passed, so there may be some inaccuracies here, but this is the gist of what I remember), logical operators like "and" or "or" weren't even commutative: "a and b" vs "b and a" could be the difference between running normally and an infinite loop. You had to insert "!" (which was totally non-declarative and non-logical) to make things work as you wanted, and at that point you were thinking about what order things would be run in, so why not use an imperative language instead, where at least the order of execution is clear in the first place.
These are valid concerns, arising mostly from inadequate implementations and a lack of modern features in early (and also current) Prolog systems. In the past 20 years, many new language constructs and implementation improvements were introduced and have become widely available in newer Prolog systems to address precisely these problems among others.
For example, inspired by pioneering systems such as XSB Prolog, modern Prolog systems such as Scryer Prolog support alternative execution strategies, internally implemented via delimited continuations as described by Desouter et al., and thus prevent many infinite loops that arise with the default execution strategy:
As another example, in 2016, Neumerkel and Kral introduced if_/3 and related predicates, which are now available as library(reif) in Scryer Prolog, and also available for SICStus with advanced compile-time expansions:
As yet further examples, better indexing techniques such as JIT indexing and Scryer Prolog's first instantiated argument indexing help to improve determinism and efficiency automatically, and much improved internal string representations allow efficient parsing of large amounts of data with Prolog's built-in grammar mechanism that is now being drafted for standardisation. Constraints such as dif/2 are now also becoming more widely available after decades of being largely ignored by implementors.
As long as we keep to the pure monotonic core of Prolog, conjunction and disjunction are indeed semantically commutative, and different execution strategies are applicable. Of course, using !/0 and other constructs that rely on specific execution strategies forfeits this advantage.
I'm a beginner at Prolog, but I always found the logic explanation unsatisfying. I viewed it more as: imagine you construct a graph. Some of the vertices you construct have side effects; then to execute a program you run DFS on the graph you made.
In your second example, I get an exception (it does not "explode"), and this is also not a "bug". In your second example, the default search strategy is unable to derive whether above(a,c) is a logical consequence of the program. The existence of such cases is expected when using the default execution strategy (SLD resolution), which sacrifices completeness for efficiency.
A key advantage of using a declarative programming language such as Prolog is that we can change the execution strategy. For example, I tried your code in Scryer Prolog and added 2 directives as the first two lines to enable an execution strategy called SLG resolution instead of the default strategy:
So, with a different search strategy, the Prolog system is also able to derive that above(a,c) holds, like Clingo.
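The kind of program under discussion, with the tabling directives added, presumably looks something like the following — a hypothetical reconstruction for SWI-Prolog (the module name and directive syntax vary between systems; the actual code being discussed isn't shown here):

```prolog
:- use_module(library(tabling)).  % built in on recent SWI-Prolog
:- table above/2.

% A left-recursive transitive closure. Under plain SLD resolution the
% first clause loops forever; under SLG resolution (tabling) it
% terminates and finds all consequences.
above(X, Z) :- above(X, Y), above(Y, Z).
above(a, b).
above(b, c).

% ?- above(a, c).
% true.
```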
Note that answer set programming cannot help in any way when I need a program to perform side effects such as initiating a network connection, hosting a web page, parsing a file etc., whereas Prolog is extremely useful and appropriate for such applications. So, the fact that the default execution strategy of Prolog is not adequate to solve this specific example is not a sensible argument for or against Prolog.
As triska says, there are different execution strategies for Prolog and
left-recursion is not a problem in practice. To plug my own work, the Inductive
Logic Programming (ILP) system I developed during my PhD
(https://github.com/stassa/louise) can learn left-recursive Prolog programs
thanks to an inductive Prolog meta-interpreter executed with SLG-resolution
(a.k.a. tabling). I mean to say that the inductive meta-interpreter itself is
tabled, so it can prove left-recursive programs without entering an infinite
recursion (it needs to do that because it learns a program by proving its
generalisation). And so it can learn left-recursive programs without "going
infinite". That used to be a big problem for ILP, but it's now well and truly
solved, in theory as well as in practice. And I say "now" but SLG-resolution is
old as a very old thing; Tamaki and Sato described it in 1986, I believe:
>> Prolog is deeply unsuited for work, use Clingo instead.
Well, that doesn't make sense. ASP doesn't even allow lists, let alone
side-effects (you know, like writing to a stream), so what "work" is it that
it's better suited for, than Prolog?
Also, if I remember correctly, Clingo is not a language but a grounder? It takes
a first-order program and grounds the entire Herbrand base into a set of
propositions so that a SAT solver can then derive the stable models of the
program? Do I have that wrong?
Let me say up front that I've never heard a worse idea in my life, and I've
heard a few. First of all, that's why ASP can't use lists, or other function symbols,
because then the Herbrand base goes infinite and good luck grounding it. Second,
SAT-solving is NP-complete. As I realised recently
(https://news.ycombinator.com/item?id=34072923) ASP is an elegant framework for
dealing with uncertainty in a purely symbolic manner, with negation as failure
and axiomatic, classical negation. But then, why go and mess this elegance up
with a dumb propositionalisation approach that loses all the good stuff about
first order logic, and that's terminally inefficient to boot? That, I still
don't get.
And that is so much better because it's executed top-down, with unification like in Prolog, and so it doesn't have to ground the entire Herbrand base and make little kittens called Skolem cry.
I am completely mystified by what the "ASP Language Standard" is. The clingo page references it and the link it cites provides literally no contextual information about it.
In the early 1990s, I knew one team was using Prolog as the rich database report language for a greenfield complex new workstation platform that was otherwise written in C.
I was surprised, because Prolog already seemed like one of those exotic things that you learned in school was used in AI research, or had been, but hadn't heard of it being used commercially at the time.
I have several books on Prolog that I still need to get through, but the one I have read (and need to finish) is Thinking as Computation. It’s a great book.
Tangentially, it's possible to convert any neural network into an equivalent decision tree. Technically rule based, but incomprehensible due to the sheer number of layers.
At least for designing APIs, I find pydantic + FastAPI (and now SQLAlchemy 2) to be an excellent platform to do the initial spec, and the beauty of it is that it automatically produces the swagger docs as well as a fully functioning API interface as well!
https://www.brz.gv.at/en/BRZ-Tech-Blog/Tech-Blog-7-Symbolic-...
In this way, several interesting questions can be answered, such as: Are there grants that have the same prerequisites? How many companies are not eligible for any grant? Are there grants where eligibility also implies other grants? etc.
Prolog is becoming more interesting now that several conforming and free implementations with comprehensive features arise, notably Scryer, Tau and Trealla Prolog as the most recent systems, and interesting bindings for other languages such as ichiban/prolog for Go. On the Datalog side, Google have recently published a Datalog system called Mangle, which supports the Datalog subset of Prolog as a proper subset.