When installing NetBSD/sparc, use a terminal type of "xterm" rather
than "sun", as anita is more likely run from an xterm or other
ANSI-like terminal than from a sun console.
In the BUGS section of the man page, mention the specific NetBSD ports
affected.
Fix typos in the man page.
Dnsruby 1.49 now required (for correct zone parsing)
ldns 1.6.6 is required to fix the zone fetcher bug
Bugfixes:
* ods-control stop did not stop the zone fetcher (bug was introduced in 1.1.0)
* Auditor correctly handles chains of empty nonterminals
* Zone fetcher can block zone transfers if an AXFR once failed.
This is a bug in ldns versions 1.6.5 and lower.
See KNOWN_ISSUES for more information.
* Bugreport #165: Ensure Output SOA serial is always bigger than Input SOA serial.
* Bugreport #166: Correct exit value from signer.
* Bugreport #167: Zone fetcher now also picks up changes when zonelist is reloaded
* Bugreport #168: ods-control with tightened control for the Enforcer
* Bugreport #169: Do not include config.h in the distribution
* Bugreport #170: Typo in a man page (ods-signer)
* Bugreport #172: Correction of some macros in a man page (ods-timing)
* Bugreport #173: A man page used a macro that does not exist (ods-ksmutil)
eekboard is a virtual keyboard software package which ships with a standalone
virtual keyboard application (eekboard), and a library to create keyboard-like
UI (libeek).
* Lots of little incremental bug fixes and enhancements in this release.
* Finally got some fixes out there for you Yahoo users behind some
particularly annoying firewalls and proxies, among other fixes. Enjoy!
Changes 2.7.2:
* We discovered a security issue in Pidgin 2.7.0 and 2.7.1 and decided to
release a patched version quickly. This release contains the fix for that
crash, and a few other minor fixes.
* More string format fixes in silcd and client library
* configure: changed AC_PROG_LIBTOOL order to fix disabling shared libs
* configure: check threads support in OpenBSD
* Fixed string format vulnerability in client entry handling
* Reported and patch provided by William Cummings
* silcd: Fixed IDENTIFY command reply handling for channels
Changes 1.1.18 (server):
* silcd: Added heartbeat support
* Added support for sending SILC_PACKET_HEARTBEAT packets to connections,
to make sure they stay alive and to detect whether they have died
* Set SO_KEEPALIVE for all accept()ed sockets
* silcd: Fixed SIGUSR1 signal handling
* Fixed the SIGUSR1 signal handling which can be used to dump the server
internals to /tmp.
* Also changed End of Stream handling to handle a NULL idata pointer instead
of ignoring the EOS when it is NULL.
* Also changed the DETACH timeout handling to use the packet stream
directly instead of looking up the client in the callback
* More string format fixes in silcd and client library
* Fix 'hh' not being handled correctly.
* Don't use "==" in sh script (Issue#2).
* Don't set $ENV in Makefile (Issue#3).
* Add "Initial Input Mode" configuration.
* Add test for zl.
* Fix zl conversion.
* Add links to external resources.
We intentionally wire down the 'libswanted' list in the package Makefile, so
don't let the hints file add new libraries that may be found outside Pkgsrc
control.
Fixes the build on Gentoo and SuSE systems, and possibly on other Linux systems
that have stray -lgdbm_compat libraries lying around.
CHANGES MADE TO MATHOMATIC 15.2.0 TO BRING IT UP TO THE NEXT VERSION:
All makefiles were improved. Library test/example program is renamed to "testmain".
Package maintainers please take note: support for the DESTDIR environment variable was
added to the makefiles; for proper operation when packaging version 15.2.1 or higher,
please remove any patches for missing DESTDIR support.
m4 Mathomatic should work now when included in the Mathomatic package (make m4install).
Thank you for packaging Mathomatic! If I did anything wrong, please let me know.
8/26/10 - Added the -e option, which processes mathematical expressions and Mathomatic commands
instead of input files on the shell command line. For example, entering
"mathomatic -eq 2+3" gives "answer = 5". This functionality has been requested
many times by Mathomatic command line users. A complete example:
CHANGES MADE TO MATHOMATIC 15.1.6 TO BRING IT UP TO THE NEXT VERSION:
Minor improvements were made to the user documentation.
8/22/10 - Removed "Complex number roots approximated" warning message, since this happens often.
Capitalized E, I, PI, and Pi are now accepted as the universal constants e, i, and pi,
without needing to enter "set no case". This allows Mathomatic to easily
accept Mathematica style expression input.
m4 Mathomatic now additionally accepts Mathematica style capitalized function input.
matho and rmath now display elapsed, CPU, and system times in seconds upon exit.
8/23/10 - Fixed #equation-number entry at the main prompt to always work and allow an expression
or equation following on the same line to be entered at that equation space.
For example: "#10 y=1/x" will work now;
previously only worked if equation space number 10 was previously allocated and used.
The way it works is: all equation spaces up to and including number 10 are allocated,
if not already allocated, upon entry of "#10".
Equation spaces are allocated with the memory allocator malloc(3).
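As a hedged illustration of the allocate-up-to-N behavior described above (the names `spaces` and `ensure_allocated` are invented for this sketch, not Mathomatic's):

```python
spaces = []   # one slot per equation space, grown on demand

def ensure_allocated(n):
    """Allocate equation spaces 1..n if they do not already exist."""
    while len(spaces) < n:
        spaces.append(None)   # Mathomatic does this with malloc(3)

ensure_allocated(10)          # entering "#10" allocates spaces 1 through 10
spaces[9] = "y = 1/x"         # the expression on the same line fills space 10
assert len(spaces) == 10
```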
CHANGES MADE TO MATHOMATIC 15.1.5 TO BRING IT UP TO THE NEXT VERSION:
7/26/10 - Disabled ncurses call for auto-color detection when CYGWIN is defined while
compiling the source code, due to a reported problem of readline failing with
ncurses in Cygwin.
7/28/10 - Disabled readline history save file for the Cygwin port,
because it is a filename that starts with a period.
7/31/10 - Added "set fractions_display" option, to allow disabling the automatic conversion of
fractions like .5 to 1/2 for display.
Developer requested and useful in the symbolic math library,
when numerical fraction output isn't wanted.
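The .5 to 1/2 conversion being toggled here is ordinary rational reconstruction; Python's standard library can sketch the same idea (illustrative only, not Mathomatic code):

```python
from fractions import Fraction

# 0.5 is exactly representable in binary, so the conversion to 1/2 is exact
assert Fraction(0.5) == Fraction(1, 2)

# inexact decimals need a bounded denominator to recover the display form
assert Fraction(0.3333333333).limit_denominator(100) == Fraction(1, 3)
```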
8/01/10 - Fixed a memory leak when ignoring the output string in the symbolic math library.
8/04/10 - Preserve overflowed powers like 2^2222 rather than aborting with an error message.
Allow simplification of math like 2*2^2222 and 2/2^2222.
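For context on why a power like 2^2222 "overflows": it is far beyond IEEE double range, while arbitrary-precision arithmetic handles it exactly. A small Python illustration (not Mathomatic code):

```python
from fractions import Fraction

big = 2 ** 2222                    # exact arbitrary-precision integer
assert 2 * big == 2 ** 2223        # like simplifying 2*2^2222
assert Fraction(2, big) == Fraction(1, 2 ** 2221)   # like 2/2^2222

overflowed = False
try:
    float(big)                     # an IEEE double tops out near 2^1024
except OverflowError:
    overflowed = True
assert overflowed
```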
8/08/10 - matho-primes runs twice as fast with the -ffast-math gcc compilation option,
which is now enabled by default. Don't try -ffast-math with the main Mathomatic
program though, because then Mathomatic won't work properly.
CHANGES MADE TO MATHOMATIC 15.1.4 TO BRING IT UP TO THE NEXT VERSION:
Many minor tweaks and improvements.
7/03/10 - Makefiles and compile scripts were corrected and enhanced
per http://www.gnu.org/prep/standards/
7/06/10 - Changed all "#if true" and "#if false" conditional commenting to "#if 1" and "#if 0"
in the C source code, thanks to Min Sik Kim of NetBSD pkgsrc.
CHANGES MADE TO MATHOMATIC 15.1.3 TO BRING IT UP TO THE NEXT VERSION:
All of the Unix man pages and user manuals for Mathomatic were fixed.
The compare and "solve verify" commands now simplify more thoroughly with "repeat simplify"
for better expression equality determination.
6/17/10 - Greatly improved file operation error reporting by using the perror(3) function.
6/18/10 - Fixed categories in "icons/mathomatic.desktop";
Mathomatic now goes under valid categories, mainly Education.
CHANGES MADE TO MATHOMATIC 15.1.2 TO BRING IT UP TO THE NEXT VERSION:
6/6/10 - I made mistakes in the improvement to the simplify command of version 15.1.2;
the original working simplify logic of version 15.1.1 is now restored, sorry.
CHANGES MADE TO MATHOMATIC 15.1.1 TO BRING IT UP TO THE NEXT VERSION:
A general cleanup was done.
A small improvement was made to the final result of the simplify and fraction commands.
Showing intermediate results in the calculate, sum, and product commands is now done with "set debug 1".
5/28/10 - Added "tests/collatz.in", the Collatz conjecture as an automatically computable equation.
CHANGES MADE TO MATHOMATIC 15.1.0 TO BRING IT UP TO THE NEXT VERSION:
Code and documentation cleanup.
5/21/10 - Added "primes/matho-sum", a utility that sums its command line arguments or standard input.
Use "matho-primes 0 2000000 | matho-sum" to find the sum of all primes less than 2,000,000.
Solves Project Euler problem #10: http://projecteuler.net/index.php?section=problems&id=10
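The same sum can be cross-checked independently; this small Python sketch (not part of the Prime Number Tools) reproduces the published Project Euler #10 answer:

```python
def sum_primes_below(n):
    """Sieve of Eratosthenes; returns the sum of all primes < n."""
    sieve = bytearray([1]) * n
    sieve[0:2] = b"\x00\x00"                 # 0 and 1 are not prime
    for i in range(2, int(n ** 0.5) + 1):
        if sieve[i]:
            # knock out every multiple of i starting at i*i
            sieve[i * i :: i] = bytearray(len(range(i * i, n, i)))
    return sum(i for i, is_prime in enumerate(sieve) if is_prime)

total = sum_primes_below(2_000_000)
print(total)  # 142913828922
```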
5/23/10 - Primes (') are allowed in variable names now, if not using the symbolic math library, so that the
derivative, integrate, and nintegrate commands can change the dependent variable to y', y'', etc.
This can be turned on in the symbolic math library by the command "set special_variable_characters='".
Non-alphanumeric characters in variable names are now converted to underline characters (_)
when exporting to a programming language or to a different program.
CHANGES MADE TO MATHOMATIC 15.0.8 TO BRING IT UP TO THE NEXT VERSION:
5/10/10 - Added "help constants" command.
5/11/10 - Integer variables are now specified by using a variable name that starts with "integer",
like "integer1", "integer_x", etc. Currently only the modulus operator "%" checks
for integer variables, to help with simplification.
5/12/10 - Corrected the output string type of the symbolic math library API. It was erroneously declared
as type "const", possibly causing a memory leak.
5/13/10 - Moved and adapted "makefile.lib" to "lib/makefile", so the symbolic math library build is isolated.
Previously "make clean" was required between different builds. All makefiles require GNU make now.
CHANGES MADE TO MATHOMATIC 15.0.7 TO BRING IT UP TO THE NEXT VERSION:
4/26/10 - Allow "make pdf" to generate PDF documentation from the HTML documentation with htmldoc.
Please read the comments in the makefile for all available options.
4/29/10 - For every makefile, CFLAGS has been modified to include OPTFLAGS as required by the Fedora Linux
build system, and OPTFLAGS defaults to the optional gcc specific flags like optimization.
In the symbolic math library, made available the equation number of the result of calling the API,
if also stored in an equation space. The result equation number is stored in the global "result_en".
Useful if you want to know where the result was stored, to act on it with further commands.
CHANGES MADE TO MATHOMATIC 15.0.6 TO BRING IT UP TO THE NEXT VERSION:
Corrections and improvements to the documentation were made.
4/3/10 - Vastly improved the "factor number" user interface, now factors integer expressions like 2^32-1.
"factor number" is disabled in library mode.
4/13/10 - The real and imaginary commands no longer fail when the expression is not complex,
just a warning is given.
4/14/10 - Changed normal display of "-1*" to "-", for prettier 2D expression output,
so things like "-a" display properly, not as "-1*a".
CHANGES MADE TO MATHOMATIC 15.0.5:
1/28/10 - Added a Python utility called "primorial" to the Prime Number Tools install
that multiplies together the results of matho-primes, displaying the primorials
of the integers given on the command line.
3/18/10 - Changed author email address to "gesslein@linux.com".
3/23/10 - Catch SIGHUP and SIGTERM signals for proper termination of the Mathomatic program;
readline was messing up when Mathomatic was terminated by closing the shell window.
The plot command now always plots expressions with grid marks displayed for reference.
CHANGES MADE TO MATHOMATIC 15.0.4:
1/21/10 - In the makefile, changed the HTML man page generator back to rman because
groff HTML output looks really bad and rman allows linking to other man pages.
groff is no longer used.
1/24/10 - Fixed "make m4install", the installed rmath and matho programs weren't working.
1/27/10 - Added GNU LGPL license notices to every C source file with a copyright notice,
for proper protections.
CHANGES MADE TO MATHOMATIC 15.0.3:
1/9/10 - Fixed a problem only in the version 15.0.3 makefile, where it didn't respect
the CC environment variable set by the user and instead always used "gcc"
as the C compiler.
CHANGES MADE TO MATHOMATIC 15.0.2:
12/27/09 - Moved get_screen_size() from main.c to am.c because it is used in the library when
compile-time options UNIX or CYGWIN are defined.
Thanks to Cygwin port maintainer Reini Urban for noticing and fixing this problem.
Defining UNIX or CYGWIN in library mode is not recommended.
12/31/09 - Added code to allow any command to be preceded by "repeat", which sets the
repeat flag for the following command. Most commands ignore the repeat flag.
1/1/10 - Ported divide and roots commands to be repeatable. Also repeatable are the
calculate and eliminate commands.
1/2/10 - Allow Taylor series computation even if the specified differentiation variable
is not found in the expression, giving a warning.
Ported simplify command to be a repeatable full simplify; that is, typing
"repeat simplify" repeatedly runs the simplify command until the result stabilizes
to the smallest size expression.
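The stabilization behavior described for "repeat simplify" is a plain fixed-point loop; a hedged Python sketch with a toy one-pass simplifier (`collapse` is invented for illustration and stands in for one simplify pass):

```python
def repeat_simplify(expr, simplify_once):
    """Apply one simplify pass until the result stops shrinking."""
    while True:
        nxt = simplify_once(expr)
        if len(nxt) >= len(expr):   # stabilized: no smaller form found
            return expr
        expr = nxt

# toy pass: collapse one doubled '+' per call
collapse = lambda s: s.replace("++", "+", 1)
assert repeat_simplify("x+++++y", collapse) == "x+y"
```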
1/7/10 - Thanks to pretty C code submitted by Simon Geard,
the code and variables commands have been made much more readable.
1/8/10 - Made Mathomatic easier to compile under Solaris, thanks to Michael Pogue of Sun.
Fixed failure to compile under BSD Unix when compiling with readline support.
CHANGES MADE TO MATHOMATIC 15.0.1:
12/19/09 - The last few versions fix the ugliness caused by the GCD factoring change made on 6/22/09.
Today's change factors out the numerical GCD of rational coefficients as needed to simplify.
Most simplification results should be beautiful and the simplest possible again now,
without the misleading observed magnitude caused by always factoring out the GCD,
which was why the change of 6/22/09 was made.
CHANGES MADE TO MATHOMATIC 15.0.0:
12/12/09 - Fixed a problem with the -q (quiet mode) option being ignored if the session options
were ever saved with the "set save" command.
12/13/09 - Added code to allow Mathomatic output to be redirected by default.
Fixed the derivative command to be successful even when the result is 0,
when compiled as a library.
12/14/09 - Added ability to log symbolic math library results, and made command behavior
more consistent in the library by always returning the final result string.
12/16/09 - The factor command now factors more by factoring out the GCD of rational coefficients.
CHANGES MADE TO MATHOMATIC 14.6.3 TO BRING IT UP TO 15.0.0:
Cleanup and more bug fixes.
11/26/09 - Added detection of the terminal's ANSI color availability, when readline is enabled.
11/28/09 - Added detection of divide by zero and NaN when using the "solve verify" command,
for more correct results.
Solving now factors out the GCD of rational coefficients, for improved results.
The fixes today are from errors solving equations like (2*x/(x - 3)) + 3 = 6/(x - 3)
11/29/09 - Disallow the variable named "nan". NaN cannot be directly entered into Mathomatic.
11/30/09 - Added shell scripts "t" and "tests/t" to easily test Mathomatic by typing "./t".
12/2/09 - The fraction command now factors out the GCD of rational coefficients like the
solve command does, so that coefficients in algebraic fractions become integers.
The GCD verifying routine was perfected by making it very strict, like it should be.
changes in sbcl-1.0.42 relative to sbcl-1.0.41
* build changes
** Cross-compilation host is now specified to make.sh using
command-line argument --xc-host=<command> instead of a positional
argument. (thanks to Daniel Herring)
** Install location can be specified to make.sh using command-line
argument --prefix=<path>. (lp#550889, thanks to Daniel Herring)
* optimization: The default implementation of
COMPUTE-DISCRIMINATING-FUNCTION does much less wasted work.
* enhancement: Explicit memory barrier operations are now available for use
by multithreaded code. See documentation for details.
* enhancement: Experimental support for threading on Linux/PPC.
* bug fix: RENAME-PACKAGE returns the package. (Thanks to Eric Marsden)
* bug fix: EXPT signals an error if first argument is a zero and second
argument is a floating point zero. (lp#571581, thanks to Roman Marynchak)
* bug fix: DEFTYPE signals an error for non-list lambda-lists.
(lp#576594, thanks to Roman Marynchak)
* bug fix: make ASDF-INSTALL compatible with the now-included ASDF2.
(lp#612998, reported by Phil Hargett; patch from Jim Wise)
* Fix ldns_rr_clone to copy question rrs properly.
* Fix ldns_sign_zone(_nsec3) to clone the soa for the new zone.
* Fix ldns_wire2dname size check from reading 1 byte beyond buffer end.
* Fix ldns_wire2dname from reading 1 byte beyond end for pointer.
* Fix crash using GOST for particular platform configurations.
* extern C declarations used in the header file.
* Removed debug fprintf from resolver.c.
* ldns-signzone checks if public key file is for the right zone.
* NETLDNS, .NET port of ldns functionality, in contrib.
* Fix handling of comments in resolv.conf parse.
* GOST code enabled if SSL recent, RFC 5933.
* bugfix #317: segfault util.c ldns_init_random() fixed.
* Fix ldns_tsig_mac_new: allocate enough memory for the hash, fix use of
b64_pton_calculate_size.
* Fix ldns_dname_cat: size calculation and handling of realloc().
* Fix ldns_rr_pop_rdf: fix handling of realloc().
* Fix ldns-signzone for single type key scheme: sign whole zone if there
are only KSKs.
* Fix ldns_resolver: also close socket if AXFR failed (if you don't,
it would block subsequent transfers).
* Fix drill: allow for a secure trace if you use DS records as trust
anchors.
1.6.5
* Catch \X where X is a digit as an error.
* Fix segfault when ip6 ldns resolver only has ip4 servers.
* Fix NSEC record after DNSKEY at zone apex not properly signed.
* Fix syntax error if last label too long and no dot at end of domain.
* Fix parse of \# syntax with space for type LOC.
* Fix ldns_dname_absolute for escape sequences, fixes some parse errs.
* bugfix #297: linking ssl, bug due to patch submitted as #296.
* bugfix #299: added missing declarations to host2str.h
* ldns-compare-zones -s to not exclude SOA record from comparison.
* --disable-rpath fix
* fix ldns_pkt_empty()
* fix ldns_resolver_new_frm_fp to not ignore lines after a comment.
* python code for ldns_rr.new_question_frm_str()
* Fix ldns_dnssec_verify_denial: the signature selection routine.
* Type TALINK parsed (draft-ietf-dnsop-trust-history).
* bugfix #304: fixed dead loop in ldns_tcp_read_wire() and
ldns_tcp_read_wire_timeout().
* GOST support with correct algorithm numbers. The plan is to make it
enabled if openssl support is detected, but it is disabled by
default in this release because the RFC is not ready.
* Fixed comment in rbtree.h about being first member and data ptr.
* Fixed a possible leak in case of out of memory in ldns_native2rdf...
* ldns_dname_is_wildcard added.
* Fixed: signatures over wildcards had the wrong labelcount.
* Fixed ldns_verify() inconsistent return values.
* Fixed ldns_resolver to copy and free tsig name, data and algorithm.
* Fixed ldns_resolver to push search onto searchlist.
* An ldns resolver now defaults to a non-recursive resolver that handles
the TC bit.
* ldns_resolver_print() prints more details.
* Fixed ldns_rdf2buffer_str_time(), which did not print timestamps
on 64bit systems.
* Make ldns_resolver_nameservers_randomize() more random.
* bugfix #310: POSIX specifies NULL second argument of gettimeofday.
* fix compiler warnings from llvm clang compiler.
* bugfix #309: ldns_pkt_clone did not clone the tsig_rr.
* Fix gentoo ebuild for drill, 'no m4 directory'.
* bugfix #313: drill trace on an empty nonterminal continuation.
Pkgsrc changes:
- adjust dependencies
- set PERL5_MODULE_TYPE to Module::Install::Bundled
- placate pkglint about whitespace
Upstream changes:
1.09 Thu 19 Aug 2010 19:08:55 UTC
- remove blib for PAUSE indexing.
1.08 Thu 19 Aug 2010 18:08:42 UTC
- Temp files now preserve the suffix of the uploaded file. This makes
it possible to feed the file directly into a mime-type-determining
module that may rely on this suffix as part of its heuristic. (Dave
Rolsky)
- Fix for RT#54443 Xforms buffering incorrectly (Simon Elliott)
- Move to Dist::Zilla
Pkgsrc changes:
- adjust dependencies
Upstream changes:
1.04
- fixed local $@ issue. This happens on some versions of perl5.
1.03
- release to cpan
- fixed win32 issue(charsbar)
1.02_02
- use randomness on finding empty port(suggested by kazuhooku)
- try to connect the port before bind(Tatsuhiko Miyagawa)
1.02_01
- better cleanup code by RAII pattern.
https://rt.cpan.org/Ticket/Display.html?id=60657
(reported by dgl)
1.02
- lazy loading issue was fixed at Test::SharedFork 0.12.
Depend on it.
https://rt.cpan.org/Public/Bug/Display.html?id=60426
(reported by J.)
1.01
- remove unused deps for use_test_base().
1.00
- bump up version!
0.16_02
- oops. packaging miss.
0.16_01
- Do not depend on IO::Socket::INET 1.31.
Test::TCP works well with older IO, I hope.
(suggested by mst)
Version 3.3
-----------------------------
08/25/09: beazley
Fixed issue 15 related to the set_lineno() method in yacc. Reported by
mdsherry.
08/25/09: beazley
Fixed a bug related to regular expression compilation flags not being
properly stored in lextab.py files created by the lexer when running
in optimize mode. Reported by Bruce Frederiksen.
Version 3.2
-----------------------------
03/24/09: beazley
Added an extra check to not print duplicated warning messages
about reduce/reduce conflicts.
03/24/09: beazley
Switched PLY over to a BSD-license.
03/23/09: beazley
Performance optimization. Discovered a few places to make
speedups in LR table generation.
03/23/09: beazley
New warning message. PLY now warns about rules never
reduced due to reduce/reduce conflicts. Suggested by
Bruce Frederiksen.
03/23/09: beazley
Some clean-up of warning messages related to reduce/reduce errors.
03/23/09: beazley
Added a new picklefile option to yacc() to write the parsing
tables to a filename using the pickle module. Here is how
it works:
yacc(picklefile="parsetab.p")
This option can be used if the normal parsetab.py file is
extremely large. For example, on jython, it is impossible
to read parsing tables if the parsetab.py exceeds a certain
threshold.
The filename supplied to the picklefile option is opened
relative to the current working directory of the Python
interpreter. If you need to refer to the file elsewhere,
you will need to supply an absolute or relative path.
For maximum portability, the pickle file is written
using protocol 0.
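A protocol-0 pickle round trip of a table looks like the following plain-Python sketch (the table contents here are made up; PLY's real tables are much larger):

```python
import os
import pickle
import tempfile

# toy stand-in for LR tables
tables = {"action": {(0, "NUMBER"): 3}, "goto": {(0, "expr"): 1}}
path = os.path.join(tempfile.mkdtemp(), "parsetab.p")

with open(path, "wb") as f:
    pickle.dump(tables, f, protocol=0)   # protocol 0: ASCII, maximally portable

with open(path, "rb") as f:
    loaded = pickle.load(f)

assert loaded == tables                  # round-trips exactly
```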
03/13/09: beazley
Fixed a bug in parser.out generation where the rule numbers
were off by one.
03/13/09: beazley
Fixed a string formatting bug with one of the error messages.
Reported by Richard Reitmeyer
Version 3.1
-----------------------------
02/28/09: beazley
Fixed broken start argument to yacc(). PLY-3.0 broke this
feature by accident.
02/28/09: beazley
Fixed debugging output. yacc() no longer reports shift/reduce
or reduce/reduce conflicts if debugging is turned off. This
restores similar behavior in PLY-2.5. Reported by Andrew Waters.
Version 3.0
-----------------------------
02/03/09: beazley
Fixed missing lexer attribute on certain tokens when
invoking the parser p_error() function. Reported by
Bart Whiteley.
02/02/09: beazley
The lex() command now does all error-reporting and diagnostics
using the logging module interface. Pass in a Logger object
using the errorlog parameter to specify a different logger.
02/02/09: beazley
Refactored ply.lex to use a more object-oriented and organized
approach to collecting lexer information.
02/01/09: beazley
Removed the nowarn option from lex(). All output is controlled
by passing in a logger object. Just pass in a logger with a high
level setting to suppress output. This argument was never
documented to begin with so hopefully no one was relying upon it.
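Suppressing output by passing in a high-level logger is standard logging behavior; a generic sketch of the idea (this is a demo logger, not lex()'s internal one):

```python
import logging

log = logging.getLogger("quiet-demo")
log.propagate = False               # keep the demo self-contained

records = []
handler = logging.Handler()
handler.emit = records.append       # capture any record that gets through
log.addHandler(handler)

log.setLevel(logging.CRITICAL + 1)  # higher than every standard level
log.warning("undefined token rule") # suppressed: below the logger's level
assert records == []

log.setLevel(logging.WARNING)       # lower the threshold again
log.warning("undefined token rule") # now it reaches the handler
assert len(records) == 1
```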
02/01/09: beazley
Discovered and removed a dead if-statement in the lexer. This
resulted in a 6-7% speedup in lexing when I tested it.
01/13/09: beazley
Minor change to the procedure for signalling a syntax error in a
production rule. A normal SyntaxError exception should be raised
instead of yacc.SyntaxError.
01/13/09: beazley
Added a new method p.set_lineno(n,lineno) that can be used to set the
line number of symbol n in grammar rules. This simplifies manual
tracking of line numbers.
01/11/09: beazley
Vastly improved debugging support for yacc.parse(). Instead of passing
debug as an integer, you can supply a Logging object (see the logging
module). Messages will be generated at the ERROR, INFO, and DEBUG
logging levels, each level providing progressively more information.
The debugging trace also shows states, grammar rule, values passed
into grammar rules, and the result of each reduction.
01/09/09: beazley
The yacc() command now does all error-reporting and diagnostics using
the interface of the logging module. Use the errorlog parameter to
specify a logging object for error messages. Use the debuglog parameter
to specify a logging object for the 'parser.out' output.
01/09/09: beazley
*HUGE* refactoring of the ply.yacc() implementation. The high-level
user interface is backwards compatible, but the internals are completely
reorganized into classes. No more global variables. The internals
are also more extensible. For example, you can use the classes to
construct a LALR(1) parser in an entirely different manner than
what is currently the case. Documentation is forthcoming.
01/07/09: beazley
Various cleanup and refactoring of yacc internals.
01/06/09: beazley
Fixed a bug with precedence assignment. yacc was assigning the precedence
of each rule based on the left-most token, when in fact, it should have been
using the right-most token. Reported by Bruce Frederiksen.
11/27/08: beazley
Numerous changes to support Python 3.0 including removal of deprecated
statements (e.g., has_key) and the addition of compatibility code
to emulate features from Python 2 that have been removed, but which
are needed. Fixed the unit testing suite to work with Python 3.0.
The code should be backwards compatible with Python 2.
11/26/08: beazley
Loosened the rules on what kind of objects can be passed in as the
"module" parameter to lex() and yacc(). Previously, you could only use
a module or an instance. Now, PLY just uses dir() to get a list of
symbols on whatever the object is without regard for its type.
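A minimal sketch of the dir()-based symbol collection (illustrative; `collect_symbols` is not PLY's actual code):

```python
def collect_symbols(obj):
    """Gather name -> value pairs from any object, module or instance alike."""
    return {name: getattr(obj, name) for name in dir(obj)
            if not name.startswith("__")}

class Rules:
    t_NUMBER = r"\d+"
    t_PLUS = r"\+"

symbols = collect_symbols(Rules())
assert sorted(symbols) == ["t_NUMBER", "t_PLUS"]
```

Because the lookup goes through dir() and getattr(), any object exposing the right names works, regardless of its type.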
11/26/08: beazley
Changed all except: statements to be compatible with Python2.x/3.x syntax.
11/26/08: beazley
Changed all raise Exception, value statements to raise Exception(value) for
forward compatibility.
11/26/08: beazley
Removed all print statements from lex and yacc, using sys.stdout and sys.stderr
directly. Preparation for Python 3.0 support.
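The compatible forms referred to in the last few entries, side by side (a sketch):

```python
import sys

# `raise Exception(value)` replaces the removed `raise Exception, value` form
try:
    raise ValueError("bad token")
except ValueError as e:                  # `as e` parses on both 2.6+ and 3.x
    sys.stderr.write("error: %s\n" % e)  # replaces `print >>sys.stderr, ...`
    caught = str(e)

assert caught == "bad token"
```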
11/04/08: beazley
Fixed a bug with referring to symbols on the parsing stack using negative
indices.
05/29/08: beazley
Completely revamped the testing system to use the unittest module for everything.
Added additional tests to cover new errors/warnings.
Version 2.5
-----------------------------
05/28/08: beazley
Fixed a bug with writing lex-tables in optimized mode and start states.
Reported by Kevin Henry.
Version 2.4
-----------------------------
05/04/08: beazley
A version number is now embedded in the table file signature so that
yacc can more gracefully accommodate changes to the output format
in the future.
05/04/08: beazley
Removed undocumented .pushback() method on grammar productions. I'm
not sure this ever worked and can't recall ever using it. Might have
been an abandoned idea that never really got fleshed out. This
feature was never described or tested so removing it is hopefully
harmless.
05/04/08: beazley
Added extra error checking to yacc() to detect precedence rules defined
for undefined terminal symbols. This allows yacc() to detect a potential
problem that can be really tricky to debug if no warning message or error
message is generated about it.
05/04/08: beazley
lex() now has an outputdir option that can specify the output directory for
tables when running in optimize mode. For example:
lexer = lex.lex(optimize=True, lextab="ltab", outputdir="foo/bar")
The behavior of specifying a table module and output directory is now
more aligned with the behavior of yacc().
05/04/08: beazley
[Issue 9]
Fixed a filename bug when specifying the modulename in lex() and yacc().
If you specified options such as the following:
parser = yacc.yacc(tabmodule="foo.bar.parsetab",outputdir="foo/bar")
yacc would create a file "foo.bar.parsetab.py" in the given directory.
Now, it simply generates a file "parsetab.py" in that directory.
Bug reported by cptbinho.
05/04/08: beazley
Slight modification to lex() and yacc() to allow their table files
to be loaded from a previously loaded module. This might make
it easier to load the parsing tables from a complicated package
structure. For example:
import foo.bar.spam.parsetab as parsetab
parser = yacc.yacc(tabmodule=parsetab)
Note: lex and yacc will never regenerate the table file if used
in this form---you will get a warning message instead.
This idea suggested by Brian Clapper.
04/28/08: beazley
Fixed a bug with p_error() functions being picked up correctly
when running in yacc(optimize=1) mode. Patch contributed by
Bart Whiteley.
02/28/08: beazley
Fixed a bug with 'nonassoc' precedence rules. Basically the
non-precedence was being ignored and not producing the correct
run-time behavior in the parser.
02/16/08: beazley
Slight relaxation of what the input() method to a lexer will
accept as a string. Instead of testing the input to see
if the input is a string or unicode string, it checks to see
if the input object looks like it contains string data.
This change makes it possible to pass string-like objects
in as input. For example, the object returned by mmap.
import mmap, os
data = mmap.mmap(os.open(filename, os.O_RDONLY),
                 os.path.getsize(filename),
                 access=mmap.ACCESS_READ)
lexer.input(data)
11/29/07: beazley
Modification of ply.lex to allow token functions to be aliased.
This is subtle, but it makes it easier to create libraries and
to reuse token specifications. For example, suppose you defined
a function like this:
def number(t):
    r'\d+'
    t.value = int(t.value)
    return t
This change would allow you to define a token rule as follows:
t_NUMBER = number
In this case, the token type will be set to 'NUMBER' and use
the associated number() function to process tokens.
11/28/07: beazley
Slight modification to lex and yacc to grab symbols from both
the local and global dictionaries of the caller. This
modification allows lexers and parsers to be defined using
inner functions and closures.
11/28/07: beazley
Performance optimization: The lexer.lexmatch and t.lexer
attributes are no longer set for lexer tokens that are not
defined by functions. The only normal use of these attributes
would be in lexer rules that need to perform some kind of
special processing. Thus, it doesn't make any sense to set
them on every token.
*** POTENTIAL INCOMPATIBILITY *** This might break code
that is mucking around with internal lexer state in some
sort of magical way.
11/27/07: beazley
Added the ability to put the parser into error-handling mode
from within a normal production. To do this, simply raise
a yacc.SyntaxError exception like this:
def p_some_production(p):
    'some_production : prod1 prod2'
    ...
    raise yacc.SyntaxError # Signal an error
A number of things happen after this occurs:
- The last symbol shifted onto the symbol stack is discarded
and parser state backed up to what it was before the
rule reduction.
- The current lookahead symbol is saved and replaced by
the 'error' symbol.
- The parser enters error recovery mode where it tries
to either reduce the 'error' rule or it starts
discarding items off of the stack until the parser
resets.
When an error is manually set, the parser does *not* call
the p_error() function (if any is defined).
*** NEW FEATURE *** Suggested on the mailing list
11/27/07: beazley
Fixed structure bug in examples/ansic. Reported by Dion Blazakis.
11/27/07: beazley
Fixed a bug in the lexer related to start conditions and ignored
token rules. If a rule was defined that changed state, but
returned no token, the lexer could be left in an inconsistent
state. Reported by
11/27/07: beazley
Modified setup.py to support Python Eggs. Patch contributed by
Simon Cross.
11/09/07: beazley
Fixed a bug in error handling in yacc. If a syntax error occurred and the
parser rolled the entire parse stack back, the parser would be left in an
inconsistent state that would cause it to trigger incorrect actions on
subsequent input. Reported by Ton Biegstraaten, Justin King, and others.
11/09/07: beazley
Fixed a bug when passing empty input strings to yacc.parse(). This
would result in an error message about "No input given". Reported
by Andrew Dalke.
Version 2.3
-----------------------------
02/20/07: beazley
Fixed a bug with character literals if the literal '.' appeared as the
last symbol of a grammar rule. Reported by Ales Smrcka.
02/19/07: beazley
Warning messages are now redirected to stderr instead of being printed
to standard output.
02/19/07: beazley
Added a warning message to lex.py if it detects a literal backslash
character inside the t_ignore declaration. This is to help avoid
problems that might occur if someone accidentally defines t_ignore
as a Python raw string. For example:
t_ignore = r' \t'
The idea for this is from an email I received from David Cimimi who
reported bizarre behavior in lexing as a result of defining t_ignore
as a raw string by accident.
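The difference is easy to see with plain Python strings; in a raw string the
backslash and 't' remain two literal characters instead of a tab:

```python
# ' \t' is a space followed by a real tab character -- what t_ignore expects.
plain = ' \t'

# r' \t' is a space, a literal backslash, and the letter 't' -- three
# characters, none of which is a tab.
raw = r' \t'

assert '\t' in plain          # real tab present
assert '\t' not in raw        # no tab at all
assert '\\' in raw            # stray backslash instead
```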
02/18/07: beazley
Performance improvements. Made some changes to the internal
table organization and LR parser to improve parsing performance.
02/18/07: beazley
Automatic tracking of line number and position information must now be
enabled by a special flag to parse(). For example:
yacc.parse(data,tracking=True)
In many applications, it's just not that important to have the
parser automatically track all line numbers. By making this an
optional feature, it allows the parser to run significantly faster
(more than a 20% speed increase in many cases). Note: positional
information is always available for raw tokens---this change only
applies to positional information associated with nonterminal
grammar symbols.
*** POTENTIAL INCOMPATIBILITY ***
02/18/07: beazley
Yacc no longer supports extended slices of grammar productions.
However, it does support regular slices. For example:
def p_foo(p):
'''foo : a b c d e'''
p[0] = p[1:3]
This change is a performance improvement to the parser--it streamlines
normal access to the grammar values since slices are now handled in
a __getslice__() method as opposed to __getitem__().
02/12/07: beazley
Fixed a bug in the handling of token names when combined with
start conditions. Bug reported by Todd O'Bryan.
Version 2.2
------------------------------
11/01/06: beazley
Added lexpos() and lexspan() methods to grammar symbols. These
mirror the same functionality of lineno() and linespan(). For
example:
def p_expr(p):
'expr : expr PLUS expr'
p.lexpos(1) # Lexing position of left-hand-expression
p.lexpos(2) # Lexing position of PLUS
start,end = p.lexspan(3) # Lexing range of right hand expression
11/01/06: beazley
Minor change to error handling. The recommended way to skip characters
in the input is to use t.lexer.skip() as shown here:
def t_error(t):
print "Illegal character '%s'" % t.value[0]
t.lexer.skip(1)
The old approach of just using t.skip(1) will still work, but won't
be documented.
10/31/06: beazley
Discarded tokens can now be specified as simple strings instead of
functions. To do this, simply include the text "ignore_" in the
token declaration. For example:
t_ignore_cppcomment = r'//.*'
Previously, this had to be done with a function. For example:
def t_ignore_cppcomment(t):
r'//.*'
pass
If start conditions/states are being used, state names should appear
before the "ignore_" text.
10/19/06: beazley
The Lex module now provides support for flex-style start conditions
as described at http://www.gnu.org/software/flex/manual/html_chapter/flex_11.html.
Please refer to this document to understand this change note. Refer to
the PLY documentation for PLY-specific explanation of how this works.
To use start conditions, you first need to declare a set of states in
your lexer file:
states = (
('foo','exclusive'),
('bar','inclusive')
)
This serves the same role as the %s and %x specifiers in flex.
Once a state has been declared, tokens for that state can be
declared by defining rules of the form t_state_TOK. For example:
t_PLUS = r'\+' # Rule defined in INITIAL state
t_foo_NUM = r'\d+' # Rule defined in foo state
t_bar_NUM = r'\d+' # Rule defined in bar state
t_foo_bar_NUM = r'\d+' # Rule defined in both foo and bar
t_ANY_NUM = r'\d+' # Rule defined in all states
In addition to defining tokens for each state, the t_ignore and t_error
specifications can be customized for specific states. For example:
t_foo_ignore = " " # Ignored characters for foo state
def t_bar_error(t):
# Handle errors in bar state
Within token rules, the following methods can be used to change states:
def t_TOKNAME(t):
t.lexer.begin('foo') # Begin state 'foo'
t.lexer.push_state('foo') # Begin state 'foo', push old state
# onto a stack
t.lexer.pop_state() # Restore previous state
t.lexer.current_state() # Returns name of current state
These methods mirror the BEGIN(), yy_push_state(), yy_pop_state(), and
yy_top_state() functions in flex.
Start states are one way to write sub-lexers.
For example, the parser might instruct the lexer to start
generating a different set of tokens depending on the context.
example/yply/ylex.py shows the use of start states to grab C/C++
code fragments out of traditional yacc specification files.
*** NEW FEATURE *** Suggested by Daniel Larraz with whom I also
discussed various aspects of the design.
10/19/06: beazley
Minor change to the way in which yacc.py was reporting shift/reduce
conflicts. Although the underlying LALR(1) algorithm was correct,
PLY was under-reporting the number of conflicts compared to yacc/bison
when precedence rules were in effect. This change should make PLY
report the same number of conflicts as yacc.
10/19/06: beazley
Modified yacc so that grammar rules could also include the '-'
character. For example:
def p_expr_list(p):
'expression-list : expression-list expression'
Suggested by Oldrich Jedlicka.
10/18/06: beazley
Attribute lexer.lexmatch added so that token rules can access the re
match object that was generated. For example:
def t_FOO(t):
r'some regex'
m = t.lexer.lexmatch
# Do something with m
This may be useful if you want to access named groups specified within
the regex for a specific token. Suggested by Oldrich Jedlicka.
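What t.lexer.lexmatch hands you is an ordinary re match object, so the
named-group lookup works exactly as it does with the re module directly.
A sketch outside of any lexer, reusing the quoted-string pattern from the
09/28/06 named-group entry below:

```python
import re

# Quoted string: remembers which quote character opened it so the
# closing quote must match.
qstring = re.compile(r'''(?P<quote>['"]).*?(?P=quote)''')

m = qstring.match('"hello"')
assert m.group('quote') == '"'    # the quote character that opened the string
assert m.group(0) == '"hello"'    # the full matched token text
```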
10/16/06: beazley
Changed the error message that results if an illegal character
is encountered and no default error function is defined in lex.
The exception is now more informative about the actual cause of
the error.
Version 2.1
------------------------------
10/02/06: beazley
The last Lexer object built by lex() can be found in lex.lexer.
The last Parser object built by yacc() can be found in yacc.parser.
10/02/06: beazley
New example added: examples/yply
This example uses PLY to convert Unix-yacc specification files to
PLY programs with the same grammar. This may be useful if you
want to convert a grammar from bison/yacc to use with PLY.
10/02/06: beazley
Added support for a start symbol to be specified in the yacc
input file itself. Just do this:
start = 'name'
where 'name' matches some grammar rule. For example:
def p_name(p):
'name : A B C'
...
This mirrors the functionality of the yacc %start specifier.
09/30/06: beazley
Some new examples added:
examples/GardenSnake : A simple indentation based language similar
to Python. Shows how you might handle
whitespace. Contributed by Andrew Dalke.
examples/BASIC : An implementation of 1964 Dartmouth BASIC.
Contributed by Dave against his better
judgement.
09/28/06: beazley
Minor patch to allow named groups to be used in lex regular
expression rules. For example:
t_QSTRING = r'''(?P<quote>['"]).*?(?P=quote)'''
Patch submitted by Adam Ring.
09/28/06: beazley
LALR(1) is now the default parsing method. To use SLR, use
yacc.yacc(method="SLR"). Note: there is no performance impact
on parsing when using LALR(1) instead of SLR. However, constructing
the parsing tables will take a little longer.
09/26/06: beazley
Change to line number tracking. To modify line numbers, modify
the line number of the lexer itself. For example:
def t_NEWLINE(t):
r'\n'
t.lexer.lineno += 1
This modification is both cleanup and a performance optimization.
In past versions, lex was monitoring every token for changes in
the line number. This extra processing is unnecessary for the vast
majority of tokens. Thus, this new approach cleans it up a bit.
*** POTENTIAL INCOMPATIBILITY ***
You will need to change code in your lexer that updates the line
number. For example, "t.lineno += 1" becomes "t.lexer.lineno += 1"
09/26/06: beazley
Added the lexing position to tokens as an attribute lexpos. This
is the raw index into the input text at which a token appears.
This information can be used to compute column numbers and other
details (e.g., scan backwards from lexpos to the first newline
to get a column position).
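The column computation described above is pure string arithmetic; a minimal
sketch (the name find_column is just illustrative) scans back from lexpos to
the most recent newline:

```python
def find_column(text, lexpos):
    """Return the 1-based column of the token starting at index lexpos."""
    # rfind returns -1 if there is no preceding newline, so +1 gives 0,
    # the start of the first line.
    line_start = text.rfind('\n', 0, lexpos) + 1
    return (lexpos - line_start) + 1

source = "x = 1\ny = 22\n"
assert find_column(source, 0) == 1    # 'x' is at column 1 of line 1
assert find_column(source, 6) == 1    # 'y' is at column 1 of line 2
assert find_column(source, 10) == 5   # '22' starts at column 5 of line 2
```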
09/25/06: beazley
Changed the name of the __copy__() method on the Lexer class
to clone(). This is used to clone a Lexer object (e.g., if
you're running different lexers at the same time).
09/21/06: beazley
Limitations related to the use of the re module have been eliminated.
Several users reported problems with regular expressions exceeding
more than 100 named groups. To solve this, lex.py is now capable
of automatically splitting its master regular expression into
smaller expressions as needed. This should, in theory, make it
possible to specify an arbitrarily large number of tokens.
09/21/06: beazley
Improved error checking in lex.py. Rules that match the empty string
are now rejected (otherwise they cause the lexer to enter an infinite
loop). An extra check for rules containing '#' has also been added.
Since lex compiles regular expressions in verbose mode, '#' is interpreted
as a regex comment; it is critical to use '\#' instead.
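Both hazards are visible with the re module alone: a pattern like [0-9]*
"succeeds" without consuming anything (so a lexer relying on it would never
advance), and in verbose mode an unescaped '#' silently truncates the pattern:

```python
import re

# An empty-match rule: the match succeeds but consumes zero characters.
m = re.match(r'[0-9]*', 'abc')
assert m is not None
assert m.group(0) == ''       # matched the empty string...
assert m.end() == 0           # ...so the scan position would never move

# In verbose mode, everything after an unescaped '#' is a comment.
v = re.compile(r'\d+  # this part is ignored', re.VERBOSE)
assert v.match('42# x').group(0) == '42'
```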
09/18/06: beazley
Added a @TOKEN decorator function to lex.py that can be used to
define token rules where the documentation string might be computed
in some way.
digit = r'([0-9])'
nondigit = r'([_A-Za-z])'
identifier = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'
from ply.lex import TOKEN
@TOKEN(identifier)
def t_ID(t):
# Do whatever
The @TOKEN decorator merely sets the documentation string of the
associated token function as needed for lex to work.
Note: An alternative solution is the following:
def t_ID(t):
# Do whatever
t_ID.__doc__ = identifier
Note: Decorators require the use of Python 2.4 or later. If compatibility
with old versions is needed, use the latter solution.
The need for this feature was suggested by Cem Karan.
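Since the decorator "merely sets the documentation string," an equivalent can
be written in a few lines. This is a hedged sketch of the idea, not ply's
exact source:

```python
def TOKEN(pattern):
    """Attach `pattern` as the docstring of a token rule function."""
    def decorate(func):
        func.__doc__ = pattern
        return func
    return decorate

# Same identifier pattern built up in the example above.
digit = r'([0-9])'
nondigit = r'([_A-Za-z])'
identifier = r'(' + nondigit + r'(' + digit + r'|' + nondigit + r')*)'

@TOKEN(identifier)
def t_ID(t):
    return t

assert t_ID.__doc__ == identifier   # lex reads the rule from the docstring
```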
09/14/06: beazley
Support for single-character literal tokens has been added to yacc.
These literals must be enclosed in quotes. For example:
def p_expr(p):
"expr : expr '+' expr"
...
def p_expr(p):
'expr : expr "-" expr'
...
In addition to this, it is necessary to tell the lexer module about
literal characters. This is done by defining the variable 'literals'
as a list of characters. This should be defined in the module that
invokes the lex.lex() function. For example:
literals = ['+','-','*','/','(',')','=']
or simply
literals = '+=*/()='
It is important to note that literals can only be a single character.
When the lexer fails to match a token using its normal regular expression
rules, it will check the current character against the literal list.
If found, it will be returned with a token type set to match the literal
character. Otherwise, an illegal character will be signalled.
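The fallback order described above can be sketched in plain Python
(illustrative only, not the lex.py internals): try the regular-expression
rules first, then consult the literals string:

```python
import re

literals = '+-*/()='
NUMBER = re.compile(r'\d+')   # stand-in for the lexer's normal token rules

def next_token(text, pos):
    """Return (type, value, newpos); literals are checked after regex rules."""
    m = NUMBER.match(text, pos)
    if m:
        return ('NUMBER', m.group(0), m.end())
    ch = text[pos]
    if ch in literals:
        # The token type is set to the literal character itself.
        return (ch, ch, pos + 1)
    raise SyntaxError("illegal character %r" % ch)

assert next_token('12+3', 0) == ('NUMBER', '12', 2)
assert next_token('12+3', 2) == ('+', '+', 3)
```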
09/14/06: beazley
Modified PLY to install itself as a proper Python package called 'ply'.
This will make it a little more friendly to other modules. This
changes the usage of PLY only slightly. Just do this to import the
modules
import ply.lex as lex
import ply.yacc as yacc
Alternatively, you can do this:
from ply import *
Which imports both the lex and yacc modules.
Change suggested by Lee June.
09/13/06: beazley
Changed the handling of negative indices when used in production rules.
A negative production index now accesses already parsed symbols on the
parsing stack. For example,
def p_foo(p):
"foo : A B C D"
print p[1] # Value of 'A' symbol
print p[2] # Value of 'B' symbol
print p[-1] # Value of whatever symbol appears before A
# on the parsing stack.
p[0] = some_val # Sets the value of the 'foo' grammar symbol
This behavior makes it easier to work with embedded actions within the
parsing rules. For example, in C-yacc, it is possible to write code like
this:
bar: A { printf("seen an A = %d\n", $1); } B { do_stuff; }
In this example, the printf() code executes immediately after A has been
parsed. Within the embedded action code, $1 refers to the A symbol on
the stack.
To perform this equivalent action in PLY, you need to write a pair
of rules like this:
def p_bar(p):
"bar : A seen_A B"
do_stuff
def p_seen_A(p):
"seen_A :"
print "seen an A =", p[-1]
The second rule "seen_A" is merely an empty production which should be
reduced as soon as A is parsed in the "bar" rule above. The
negative index p[-1] is used to access whatever symbol appeared
before the seen_A symbol.
This feature also makes it possible to support inherited attributes.
For example:
def p_decl(p):
"decl : scope name"
def p_scope(p):
"""scope : GLOBAL
| LOCAL"""
p[0] = p[1]
def p_name(p):
"name : ID"
if p[-1] == "GLOBAL":
# ...
elif p[-1] == "LOCAL":
#...
In this case, the name rule is inheriting an attribute from the
scope declaration that precedes it.
*** POTENTIAL INCOMPATIBILITY ***
If you are currently using negative indices within existing grammar rules,
your code will break. This should be extremely rare, if not non-existent, in
most cases. The argument to various grammar rules is usually not
processed in the same way as a list of items.
Version 2.0
------------------------------
09/07/06: beazley
Major cleanup and refactoring of the LR table generation code. Both SLR
and LALR(1) table generation is now performed by the same code base with
only minor extensions for extra LALR(1) processing.
09/07/06: beazley
Completely reimplemented the entire LALR(1) parsing engine to use the
DeRemer and Pennello algorithm for calculating lookahead sets. This
significantly improves the performance of generating LALR(1) tables
and has the added feature of actually working correctly! If you
experienced weird behavior with LALR(1) in prior releases, this should
hopefully resolve all of those problems. Many thanks to
Andrew Waters and Markus Schoepflin for submitting bug reports
and helping me test out the revised LALR(1) support.
Version 1.8
------------------------------
08/02/06: beazley
Fixed a problem related to the handling of default actions in LALR(1)
parsing. If you experienced subtle and/or bizarre behavior when trying
to use the LALR(1) engine, this may correct those problems. Patch
contributed by Russ Cox. Note: This patch has been superseded by
revisions for LALR(1) parsing in Ply-2.0.
08/02/06: beazley
Added support for slicing of productions in yacc.
Patch contributed by Patrick Mezard.
Version 1.7
------------------------------
03/02/06: beazley
Fixed an infinite recursion problem in the ReduceToTerminals() function that
would sometimes come up in LALR(1) table generation. Reported by
Markus Schoepflin.
03/01/06: beazley
Added "reflags" argument to lex(). For example:
lex.lex(reflags=re.UNICODE)
This can be used to specify optional flags to the re.compile() function
used inside the lexer. This may be necessary for special situations such
as processing Unicode (e.g., if you want escapes like \w and \b to consult
the Unicode character property database). The need for this suggested by
Andreas Jung.
03/01/06: beazley
Fixed a bug with an uninitialized variable on repeated instantiations of parser
objects when the write_tables=0 argument was used. Reported by Michael Brown.
03/01/06: beazley
Modified lex.py to accept Unicode strings both as the regular expressions for
tokens and as input. Hopefully this is the only change needed for Unicode support.
Patch contributed by Johan Dahl.
03/01/06: beazley
Modified the class-based interface to work with new-style or old-style classes.
Patch contributed by Michael Brown (although I tweaked it slightly so it would work
with older versions of Python).
Version 1.6
------------------------------
05/27/05: beazley
Incorporated patch contributed by Christopher Stawarz to fix an extremely
devious bug in LALR(1) parser generation. This patch should fix problems
numerous people reported with LALR parsing.
05/27/05: beazley
Fixed problem with lex.py copy constructor. Reported by Dave Aitel, Aaron Lav,
and Thad Austin.
05/27/05: beazley
Added outputdir option to yacc() to control output directory. Contributed
by Christopher Stawarz.
05/27/05: beazley
Added rununit.py test script to run tests using the Python unittest module.
Contributed by Miki Tebeka.
Version 1.5
------------------------------
05/26/04: beazley
Major enhancement. LALR(1) parsing support is now working.
This feature was implemented by Elias Ioup (ezioup@alumni.uchicago.edu)
and optimized by David Beazley. To use LALR(1) parsing do
the following:
yacc.yacc(method="LALR")
Computing LALR(1) parsing tables takes about twice as long as
the default SLR method. However, LALR(1) allows you to handle
more complex grammars. For example, the ANSI C grammar
(in example/ansic) has 13 shift-reduce conflicts with SLR, but
only has 1 shift-reduce conflict with LALR(1).
05/20/04: beazley
Added a __len__ method to parser production lists. Can
be used in parser rules like this:
def p_somerule(p):
"""a : B C D
| E F"""
if (len(p) == 3):
# Must have been first rule
elif (len(p) == 2):
# Must be second rule
Suggested by Joshua Gerth and others.