
This is part of a series of questions focused on a sister project to the Abstraction Project, which aims to abstract the concepts used in language design into a framework. The sister project, called OILexer, aims to construct a parser from grammar files without the use of code injection on matches.

Some other pages associated with these questions can be viewed here (structural typing) and here (ease of use). The meta-topic about the framework and the proper place to post can be found here.

I'm getting to the point where I'm about to start extracting the parse tree out of a given grammar, followed by a recursive descent parser that uses DFAs to discern forward paths (similar to ANTLR 4's LL(*)), so I figured I'd open it up for insight.

In a parser compiler, what kinds of features are ideal?

So far here is a brief overview of what's implemented:

  1. Templates
  2. Look-ahead prediction: knowing what's valid at a given point.
  3. Rule 'deliteralization': taking the literals within rules and resolving which token they come from.
  4. Nondeterministic automata
  5. Deterministic automata
  6. Simple lexical state machine for token recognition
  7. Token automaton methods:
    • Scan - useful for comments: Comment := "/*" Scan("*/");
    • Subtract - useful for identifiers: Identifier := Subtract(IdentifierBody, Keywords);
      • Ensures the identifier doesn't accept keywords (a sketch of one way to realize this follows the list).
    • Encode - encodes an automaton as a series of X transitions in base N.
      • UnicodeEscape := "\\u" BaseEncode(IdentifierCharNoEscape, 16, 4);
        • Matches a Unicode escape in hexadecimal, using four base-16 transitions. The difference between this and [0-9A-Fa-f]{4} is that the automaton resulting from Encode limits the allowed set of hexadecimal values to the scope of IdentifierCharNoEscape. So if you give it \u005c, the Encode version will not accept the value. Things like this come with a serious caveat: use sparingly. The resulting automaton can be quite complex.
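As referenced in the Subtract bullet above, here is a minimal sketch of one way such a subtraction can be realized: the standard product construction for the difference of two DFAs. The Dfa shape and function names are hypothetical, not OILexer's actual representation:

    from itertools import product

    class Dfa:
        def __init__(self, states, alphabet, delta, start, accepting):
            self.states = states        # iterable of state ids
            self.alphabet = alphabet    # shared input alphabet
            self.delta = delta          # total transition map: (state, symbol) -> state
            self.start = start
            self.accepting = set(accepting)

    def subtract(a, b):
        """Build a DFA for L(a) - L(b): accept in a AND reject in b."""
        states = set(product(a.states, b.states))
        delta = {((p, q), c): (a.delta[(p, c)], b.delta[(q, c)])
                 for (p, q) in states for c in a.alphabet}
        accepting = {(p, q) for (p, q) in states
                     if p in a.accepting and q not in b.accepting}
        return Dfa(states, a.alphabet, delta, (a.start, b.start), accepting)

    def accepts(d, text):
        state = d.start
        for symbol in text:
            state = d.delta[(state, symbol)]
        return state in d.accepting

With IdentifierBody and Keywords expressed as total DFAs over the same alphabet, the product machine accepts exactly the identifier bodies that are not keywords.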

What isn't implemented is CST generation; I need to adjust the deterministic automata to carry over the proper context to get this working.

For anyone interested, I've uploaded a pretty-printed version of the original form of the T*y♯ project. Each file should link to every other file. I started to link in the individual rules so they can be followed, but it would've taken far too long (it would've been simpler to automate!).

If more context is needed, please post accordingly.

Edit 5-14-2013: I've written code to create GraphViz graphs for the state machines within a given language. Here is a GraphViz digraph of the AssemblyPart. The members linked in the language description should have a rulename.txt in their relative folder with the digraph for that rule. Some of the language description has changed since I posted the example; this is due to simplifying things about the grammar. Here's an interesting GraphViz image.

Wall o' text. Don't take this the wrong way, I appreciate a thoroughly explained problem. In this case it is simply a little too verbose. From what I've gathered, you're asking what features should be included in a grammar parser or how to make one without starting from scratch? Please edit to answer the following questions (you don't have to rewrite, simply append to the end in summary): What is your problem? What constraints are you bound by in possible solutions to your problem (it must be fast, it must be LL*, etc.)? – Neil May 7 at 7:22
I'm asking for insight towards the feature set. The focus is on ease of use. The difficulty lies in giving someone who doesn't know the project enough insight into it that they're informed about its focus. I'm not asking 'how to do'; I'm asking about usability. Suggestions on how to trim the question are appreciated. – Alexander Morou May 7 at 7:25
I've trimmed the question to the major focus. I'll add context shortly. – Alexander Morou May 7 at 7:33
To me, it is not obvious what the project is all about. For example, since the days of yacc we have seen lots of parser generators. What is different in your OILexer? What is new? – Ingo May 10 at 4:35
The goal of this project is to simplify parser generation. It is similar, yes, to YACC/Bison and FLEX/LEX; the major difference is avoiding the complexity involved in those programs. Keeping things simple and to the point is the main goal, which is why I created a format devoid of odd sectioning - the aim is to make it similar to regular programming, only specific to language development. Tokens are defined using ':=' after their name; rules are defined using '::=' after their name. Templates use '<' and '>' for their arguments, followed by '::=', since they share rule syntax. – Alexander Morou May 10 at 6:06
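To make the format described in that comment concrete, here is a hypothetical fragment in that style; the token and rule names are invented for illustration, not taken from OILexer's actual grammars:

    Number       := [0-9]+;
    Identifier   := Subtract(IdentifierBody, Keywords);
    Expression  ::= Term ('+' Term)*;
    List<T>     ::= T (',' T)*;
    Arguments   ::= '(' List<Expression> ')';

Unlike a yacc-style file with its %%-separated sections, tokens, rules, and templates sit side by side and are distinguished only by their definition operator.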

3 Answers


This is an excellent question.

I've been working on a lot of parsing recently, and IMHO some of the key features are:

  • error reporting. As @guysherman mentioned. When an error is found, I want to know where the error was and what was going on when it happened. Unfortunately, I haven't been able to find good resources explaining how to generate decent errors when backtracking comes into play.

  • partial results. When parsing fails, I want to be able to see what was successfully parsed from the part of the input that was before the location of the error.

  • abstraction. You can never build in enough functions. The user will always need more. Is this what you mean by templates?

  • I agree with your #2 (look-ahead prediction). I think it helps to generate good error reports. Is it useful for anything else?

  • support for building the AST as parsing occurs (see the sketch after this list). I hate building a concrete syntax tree and then writing another pass to turn it into an AST -- I feel that it's more verbose and error-prone. I'm sure there are valid reasons to do that, but I don't know what they are.

  • some kind of logging. I'm not sure about this one; maybe to show a trace of the rules the parser has tried, or to keep track of junk tokens such as whitespace or comments, in case (for instance) you want to use the tokens to generate HTML documentation.

  • ability to deal with context-sensitive languages. Not sure how important this one is, either.
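On the AST bullet above, a minimal sketch of what "building the AST as parsing occurs" can look like in a hand-written recursive descent parser; the tokenizer, node types, and grammar are invented for illustration:

    import re
    from dataclasses import dataclass

    @dataclass
    class Num:
        value: int

    @dataclass
    class BinOp:
        op: str
        left: object
        right: object

    class Parser:
        def __init__(self, text):
            self.tokens = re.findall(r"\d+|[+*()]", text)
            self.pos = 0

        def peek(self):
            return self.tokens[self.pos] if self.pos < len(self.tokens) else None

        def eat(self, expected=None):
            tok = self.peek()
            if tok is None or (expected and tok != expected):
                raise SyntaxError(f"expected {expected!r}, got {tok!r} at token {self.pos}")
            self.pos += 1
            return tok

        # Expr ::= Term ('+' Term)*
        def expr(self):
            node = self.term()
            while self.peek() == "+":
                self.eat("+")
                node = BinOp("+", node, self.term())  # AST node built in place
            return node

        # Term ::= Num | '(' Expr ')'
        def term(self):
            if self.peek() == "(":
                self.eat("(")
                node = self.expr()
                self.eat(")")
                return node  # parentheses leave no trace in the AST
            return Num(int(self.eat()))

    print(Parser("1+(2+3)").expr())
    # => BinOp(op='+', left=Num(value=1), right=BinOp(op='+', left=Num(value=2), right=Num(value=3)))

No concrete syntax tree is materialized; each parse action returns the abstract node directly, which is the verbosity saving the answer describes.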

I've edited the question body to include a link to the PrecedenceHelper in the bullet that reads 'Templates'. It allows parameter-array tuples, so if you have four parameters, each a parameter array, the template must be used in argument sets of four. – Alexander Morou May 18 at 3:48
The main reason you would construct the CST is the overall layout of the parsed file. If you want to pretty-print the document, your best bet is a CST, because ASTs, as their name implies, lack the information to handle the odd spacing that a CST would capture. Transforming a CST is usually pretty easy if it's a good CST. – Alexander Morou May 18 at 16:24
I've edited the question again on the topic of built-in functions available to use. – Alexander Morou May 20 at 23:38
I don't think I did a great job expressing my point about templates/functions: I meant that because you can never have enough, a system shouldn't try to figure them out ahead of time - the user needs to be able to create his own. – Matt Fenwick May 21 at 2:54
I found one approach particularly useful for error reporting with infinite backtracking (Packrat): each production rule is annotated with an error message (worded as "blah-blah-blah expected"), and upon failure that message is stored in the stream the same way as memoised tokens. If none of the failures are recoverable (parsing terminated before reaching the end of the stream), the rightmost error message (or a collection of such messages) is the most appropriate. That's the easiest thing to do; of course, there are ways of refining it further with more annotations. – SK-logic May 21 at 7:57
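A minimal sketch of that rightmost-failure technique, with invented helper names; in a real packrat parser the bookkeeping would sit alongside the memo table:

    class ParseState:
        def __init__(self, text):
            self.text = text
            self.furthest = (-1, None)   # (position, message) of rightmost failure

        def fail(self, pos, message):
            if pos > self.furthest[0]:
                self.furthest = (pos, message + " expected")
            return None                  # None signals failure; caller backtracks

    def literal(word):
        """Match an exact word, annotated with its own error message."""
        def rule(state, pos):
            if state.text.startswith(word, pos):
                return word, pos + len(word)
            return state.fail(pos, repr(word))
        return rule

    def sequence(*rules):
        def rule(state, pos):
            values = []
            for r in rules:
                result = r(state, pos)
                if result is None:
                    return None
                value, pos = result
                values.append(value)
            return values, pos
        return rule

    state = ParseState("let x =")
    parse = sequence(literal("let "), literal("x "), literal("= "))
    if parse(state, 0) is None:
        pos, msg = state.furthest
        print(f"syntax error at offset {pos}: {msg}")  # offset 6: '= ' expected

Because only the failure at the greatest position survives, backtracked dead ends earlier in the input don't pollute the final report.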

I'm not experienced in language design, but I've had a go at writing a parser once, when I was creating an IDE for a game engine.

Something that is important to your ultimate end users, in my opinion, is error messages that make sense. Not a particularly earth-shattering point, I know, but following it backwards, one of the key implications of this is that you need to be able to avoid false positives. Where do the false positives come from? They come from the parser falling over at the first error and never quite recovering. C/C++ is notorious for this (although the newer compilers are a bit smarter).

So what do you need instead? I think that rather than just knowing what is/isn't valid at a point, you need to know how to take what isn't valid and make a minimal change so that it becomes valid - so that you can continue parsing without generating false errors from your recursive descent getting confused. Being able to build a parser that can generate this information not only gives you a very robust parser, but it opens up some fantastic features for the software that consumes it.
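One common approximation of "make a minimal change and continue" is panic-mode recovery: record the error, skip ahead to a synchronization token, and resume, so one mistake doesn't cascade into false positives. A minimal sketch, with an invented token list and statement shape:

    SYNC = {";"}

    def parse_statement(tokens, pos):
        """Expects IDENT '=' IDENT ';' and returns (ast_or_None, next_pos)."""
        if (pos + 3 < len(tokens)
                and tokens[pos].isidentifier()
                and tokens[pos + 1] == "="
                and tokens[pos + 2].isidentifier()
                and tokens[pos + 3] == ";"):
            return ("assign", tokens[pos], tokens[pos + 2]), pos + 4
        return None, pos

    def parse_program(tokens):
        statements, errors, pos = [], [], 0
        while pos < len(tokens):
            stmt, pos2 = parse_statement(tokens, pos)
            if stmt is not None:
                statements.append(stmt)
                pos = pos2
            else:
                errors.append(f"bad statement at token {pos}")
                while pos < len(tokens) and tokens[pos] not in SYNC:
                    pos += 1        # skip to the sync token...
                pos += 1            # ...and past it, then resume normally
        return statements, errors

    tokens = ["a", "=", "b", ";", "x", "%", ";", "c", "=", "d", ";"]
    print(parse_program(tokens))
    # => ([('assign', 'a', 'b'), ('assign', 'c', 'd')], ['bad statement at token 4'])

The malformed middle statement produces exactly one error, and parsing of the third statement proceeds normally rather than generating a chain of spurious complaints.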

I realize that I may be suggesting something really difficult, or stupidly obvious, sorry if this is the case. If this is not the sort of thing you're looking for, I'll happily delete my answer.

This is one of the things I plan on doing. To assist with my knowledge of the domain, a friend of mine suggested writing an actual parser by hand to get the hang of automating it. One thing I realized pretty quickly: parsers are complex, and there are things we do by hand that simplify the process. Rules and Templates share the same syntax; however, there are language elements which are valid in Templates but not in rules, and internal states that handle this task. Which brought an idea to mind: rules should be able to specify path aids to make sharing sub-rules easier. – Alexander Morou May 10 at 15:30
... this should be fairly simple to carry over into the automation, but would require the automata to have state-based conditions. I'll work on this some and get back to you. ANTLR uses finite state automata to handle cycles of, say, "T"*, whereas I'll use them to handle most of the parse process, since the reductions should be cleaner as states when there are 800+ variations in a rule (this would quickly bloat into spaghetti code in standard if/else form). – Alexander Morou May 10 at 15:33

Have you heard of Haskell's Parsec? I've only dabbled with its syntax, but from what I can tell, parser-combinator libraries seem a much more usable and powerful way of writing parsers than parser generators.
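For readers who haven't seen the combinator style, the idea is that parsers are ordinary values combined by ordinary functions. A toy sketch of the flavor (this is not Parsec's actual API):

    # A parser here is a function (text, pos) -> (value, new_pos) or None.
    def char(c):
        return lambda text, pos: (c, pos + 1) if text[pos:pos + 1] == c else None

    def many(p):
        def parser(text, pos):
            values = []
            while (result := p(text, pos)) is not None:
                value, pos = result
                values.append(value)
            return values, pos
        return parser

    def either(p, q):
        return lambda text, pos: p(text, pos) or q(text, pos)

    digit = either(char("0"), char("1"))   # toy alphabet: just 0 and 1
    binary = many(digit)
    print(binary("1011x", 0))              # => (['1', '0', '1', '1'], 4)

Because grammars are built from plain functions, users can define new combinators freely - which connects to the point above that a system can never build in enough functions ahead of time.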

That defeats the point of writing one. Please read the question. I'm not asking 'What can I use other than what I've been writing for a few years to parse a language?' I'm asking what features a parser compiler would need to be useful. – Alexander Morou May 16 at 1:31
Sorry - as I understand you, you are trying to write a parser generator, which presumably has something to offer over existing parser generators. I meant less "use parsec" and more "take inspiration from parser combinators when deciding what features you want to implement". – Sonarpulse May 16 at 1:46
