PELN 0.1 released

I decided to release PELN, the Plaintext Electronic Lab Notebook, as open source. The current version is PELN 0.1. It’s pretty rough at the moment, but an early release seems in the open-source spirit. The license is very liberal (it’s modeled after the zlib license).

PELN 0.1 links:

PELN the First

I implemented PELN, the Plaintext Electronic Lab Notebook that I proposed in a preceding post. I followed the original proposal fairly closely, but did make some changes.

“peln.exe” is the executable at the heart of the project, written in ANSI C and using SQLite. The executable currently supports only two options: (1) “submit”, which records a plaintext file as an entry in the database, and (2) “dump”, which writes all database entries to the log file “peln_log.txt”.

Option (1) also appends the latest entry to “peln_log.txt”. The log provides a convenient way to search for entries, leaving one less thing for me to implement (at least for the time being).

Option (2) is primarily for regenerating a lost or corrupted log file. It was also useful for creating my first log file, since I had already submitted entries to the database before the log was implemented. It will be useful for other, similar tasks, like updating the log format.
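As a rough sketch, the dispatch for those two options looks something like this (the helper functions here are illustrative stubs, not the actual code):

    /* Sketch of the two-option CLI dispatch. The helpers are stubs
       standing in for the real database and logging code. */
    #include <stdio.h>
    #include <string.h>

    static int submit_entry(const char *path)   /* insert file into dB, append to log */
    {
        printf("submitting %s\n", path);
        return 0;
    }

    static int dump_log(void)                   /* regenerate peln_log.txt from the dB */
    {
        printf("dumping entries to peln_log.txt\n");
        return 0;
    }

    int main(int argc, char *argv[])
    {
        if (argc == 3 && strcmp(argv[1], "submit") == 0)
            return submit_entry(argv[2]);
        if (argc == 2 && strcmp(argv[1], "dump") == 0)
            return dump_log();
        fprintf(stderr, "usage: peln submit <file> | peln dump\n");
        return 1;
    }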

When an entry is submitted to the database, it is assigned a sequential entry number and timestamped. The timestamp is UTC, so I’m considering adding local time to the log in addition to UTC. There are other options, such as storing the local submit timestamp in the dB (again in addition to UTC, which is a must).
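Producing both stamps from the same instant is cheap in ANSI C, which is part of why the dual-timestamp idea is tempting. A sketch (the format string is illustrative, not the actual log format):

    /* Sketch: formatting the same instant as both UTC and local time. */
    #include <stdio.h>
    #include <time.h>

    int main(void)
    {
        char utc[32], local[32];
        time_t now = time(NULL);

        strftime(utc, sizeof utc, "%Y-%m-%d %H:%M:%S", gmtime(&now));
        strftime(local, sizeof local, "%Y-%m-%d %H:%M:%S", localtime(&now));
        printf("UTC:   %s\nlocal: %s\n", utc, local);
        return 0;
    }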

The code is 327 lines, so I left it entirely in “main.c” rather than splitting it into multiple files. I tried to check every point where an error might occur, but all that checking sometimes makes the code hard to follow, so I’ll refactor the error handling at some point. I will also (eventually) refactor the logging code, because appending to the log and generating a new log are separate functions that replicate a lot of code (a copy/paste job).
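One direction I’m considering for the error-handling refactor is the classic ANSI C single-cleanup-path idiom. A sketch (sqlite3_open and sqlite3_close are the real SQLite API; the function around them is illustrative):

    /* Sketch of the goto-cleanup idiom for consolidating error checks. */
    #include <stdio.h>
    #include <sqlite3.h>

    int submit_entry(const char *db_path, const char *entry_path)
    {
        sqlite3 *db = NULL;
        FILE *fp = NULL;
        int rc = 1;                 /* assume failure until the end is reached */

        if (sqlite3_open(db_path, &db) != SQLITE_OK)
            goto cleanup;
        fp = fopen(entry_path, "rb");
        if (fp == NULL)
            goto cleanup;

        /* ... read the file and insert it into the dB ... */

        rc = 0;                     /* success */
    cleanup:
        if (fp) fclose(fp);
        if (db) sqlite3_close(db);  /* SQLite expects close even after a failed open */
        return rc;
    }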

A batch file, “peln.bat”, handles the actual logging session. It creates an empty file named “peln_session_entry.txt” and opens it in Notepad++. In order to launch a new Notepad++ instance and prevent it from re-opening previous files in tabs, the batch file passes the command-line options “-multiInst” and “-nosession”.

The lab notebook entry is typed into Notepad++ and can be saved frequently without issue. Closing the Notepad++ instance returns control to the batch file, which calls peln.exe to submit the newly-created entry. The entry is sequenced, timestamped, and appended to the log.

If there are no errors at this point, the batch file truncates “peln_session_entry.txt” and loops back to the top (opening a new Notepad++ instance). If any error is reported, the file is not truncated (since the entry most likely was not recorded).

With an infinite loop like that (and Notepad++ popping up each iteration), you may wonder how the loop is terminated. One inelegant option is to terminate the console that is running the batch file.

But as it turns out, one of the errors that “peln.exe” reports is attempting to submit an empty file. This means a convenient method for breaking the loop is to close Notepad++ prior to typing anything (or by deleting whatever has been typed). Unfortunately, this generates an error report, but I’ll fix that by detecting an empty “peln_session_entry.txt” in the batch file.

So, there are some things left to be done, but I’m happy with the results so far.

PELN logfile?

In the preceding post, I proposed a simple Plaintext Electronic Lab Notebook application (PELN) which I’m going to implement. While writing the post, I had an idea to use a logfile as the source of the entries, but it is only speculative and is not part of the tier 0 plan, so I’m posting about it separately.

I could set up a software daemon to monitor a log file. New entries are added to the end of the log, and upon save, the daemon responds to the file change notification and compares the new version of the file with the previous. Typically, all changes will be at the end of the log, which is simple to validate by keeping a copy of the previous version. The daemon could record the newly added text as an entry. If there are changes elsewhere in the file, it would need to ask what should be done, or could automatically record version changes.
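The append-only check itself is simple if the daemon keeps the previous contents in memory. A sketch (the platform-specific change-notification plumbing is omitted):

    /* Sketch: verify that a changed log file is a pure append by checking
       that the old contents are still a prefix of the new contents. */
    #include <stddef.h>
    #include <string.h>

    /* Returns a pointer to the newly appended text within new_buf,
       or NULL if the change was not a pure append. */
    const char *appended_text(const char *old_buf, size_t old_len,
                              const char *new_buf, size_t new_len)
    {
        if (new_len < old_len)
            return NULL;                  /* file shrank: not an append */
        if (memcmp(old_buf, new_buf, old_len) != 0)
            return NULL;                  /* earlier content changed: ask the user */
        return new_buf + old_len;         /* candidate new entry text */
    }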

The problem would be that saving frequently would result in fragments being recorded as entries. I think that could be solved fairly readily by adding a “break” marker (e.g. a special symbol like an asterisk “*” or pound “#” on a line by itself) which indicates the begin/end of entries. Ideally, each time you finish an entry, you would place the break marker and save, otherwise your last entry might not be recorded until the next time you “log on”.

That in turn could be handled by keeping track of the last time the file was saved: if the file is saved again with a break marker immediately following the previous data (even if there are new entries after it), the daemon could recognize that the previous save actually represented a completed entry and record the previous save’s timestamp as the timestamp for that entry.

One thing to keep in mind about this logging idea is that the log differs from the data in the dB. For example, the timestamps would not be present in the log, and the break markers would not be present in the dB. However, it should be possible to reconstruct the log from the database if necessary.

Finally, it would also be possible to update the log file so that it contains the timestamps of entries (even though you do not need to add timestamps manually). To do this, the break marker symbol could be followed by the timestamp. The daemon would recognize this as a “null change” in addition to recognizing the break marker on a line by itself.

PELN: Plaintext Electronic Lab Notebook

I currently keep “lab notebooks” on my computers, both at home and at work. I prefer to use plaintext ASCII format, and I check my notebook into my version control software (VCS). My lab notebooks are simple running logs, in reverse-chronological order, and I manually timestamp each entry.

I usually view lab notes as permanent records. I try not to go back and modify entries. If I do, I usually add a timestamped note about the changes, and try to remember to check both the “before” and “after” versions into VCS (though if the change is purely an addition, with no deletion or modification, the “before” isn’t necessary).

However, manual timestamping, manual VCS check-in/check-out, and multiple logs get burdensome. I’d like an automated solution.

Apparently there are some electronic lab notebook (ELN) software packages available, but I haven’t found any that are both free and simple. Some are online-only or require a rather large server installation. Others are expensive professional collaboration packages. I’d really like an ASCII plaintext package with good version tracking and automated timestamping, but not much else.

So, while I continue to scour the web for an existing application, I’m also going to look into creating my own ELN. I’m not committing to it, but it does seem like a good learning project.

To that end, if I were going to create an ELN, what features would I expect, and how would it be implemented?

The minimal set of features is:

(1) Automated timestamps.
(2) Automatic tracking of changes to entries, with timestamps (implies viewing history and diffs).
(3) ASCII only.
(4) Use a standard text editor (my pick: Notepad++).
(5) Text search.

In order to implement something like that, one option is to use VCS as a backend, but I have some reservations about that; plus, I’d like to work out a database (dB) alternative to VCS because it might be useful for asset-pipeline projects in the future. I’ll start with a dB, and SQLite is my first choice at the moment since I only need this to work locally on my PC/laptop. I’ll worry about client/server dBs later.

I’m not certain what I want to do for the GUI at the moment, though maybe I can start with a CLI. The GUI can be built later and “shell out” to the CLI to perform its functions (the Linux way, so to speak). Even without a GUI, I’ll be able to visually browse the database with the SQLite browser (as limiting as that may be).

The first order of business would be to create a content-addressable store (CAS) that records each lab entry along with a timestamp.

Initially, version tracking isn’t even necessary. Each entry could be recorded when it is complete. If another version of the “same” entry is recorded later, it just becomes another lab notebook entry.

In any case, the CAS needs to record all lab entries and versions as separate entities. It is not concerned with versions, other than as a possible optimization (i.e. store back-diffs instead of entire entries). However, storage optimization will come later if at all (plaintext is not large by today’s standards).

A question regarding the CAS is whether to store data in blobs or as separate files. Considering that most of the data is not expected to be excessively large, I think blobs may be the way to go, at least for the initial implementation.

Actually, it’s looking like this project would be a lot simpler to begin than I imagined. A tier 0 implementation could be as simple as two CLI commands: (i) create a new lab notebook dB, (ii) record a text file as an entry.

And, strictly speaking, I don’t really need a command to create a new lab notebook — it could be created manually, or the record command could create the notebook dB on first use.

Recording text files as entries is still somewhat cumbersome (plus I would like to have two timestamps: one when the entry is begun and another when it is recorded, though the recording timestamp is more important).

Still, I can probably do a lot with that, for example, write a batch script which creates a file, opens it in a new Notepad++ window (with the command line options “-multiInst” and “-nosession”), then records the text file entry after Notepad++ exits.

Any persistent state which is necessary can be recorded in the dB. This does not appear to be necessary for the proposed tier 0 implementation, but would be handy should entries be opened and later closed, or if version-on-save were added. The CLI could check what files were open and could include an option to record all currently open entries.

What would the dB tables look like?

I would need the CAS table, which would probably be: an ID, a timestamp, and a blob. Records are added and never deleted. That reminds me that I should review SQL constraints and other features so I write the dB (more or less) correctly this time. Actually, it might be better to eliminate the ID and use the timestamp as the key, though I would need to make sure no two entries ever have the same timestamp (e.g. the previously mentioned option to record all currently open entries would need to wait between recordings).

I was thinking about whether the timestamp should be stored in a separate table, but it is so tightly bound to the recorded blob that it seems it should be kept in the same table. This seems in keeping with 3NF.

That means the first implementation is just one dB table! Wow, this is getting easier all the time. I’m definitely going to give it a try.
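As a sketch, that one table (with the timestamp as the key) might be created like so; the table and column names are placeholders, though sqlite3_open and sqlite3_exec are the real API:

    /* Sketch: creating the single-table lab notebook dB.
       Table and column names are placeholders. */
    #include <stdio.h>
    #include <sqlite3.h>

    int main(void)
    {
        sqlite3 *db;
        const char *sql =
            "CREATE TABLE IF NOT EXISTS entries ("
            " recorded_utc TEXT PRIMARY KEY,"  /* timestamp as key: must be unique */
            " body         BLOB NOT NULL"
            ");";

        if (sqlite3_open("notebook.db", &db) != SQLITE_OK)
            return 1;
        if (sqlite3_exec(db, sql, NULL, NULL, NULL) != SQLITE_OK)
            fprintf(stderr, "create failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 0;
    }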

Containment catalyst transformation

I still keep thinking about my CA design for the Containment game, but I hadn’t been able to figure out how to make “catalyst” operate the way I wanted. I think I’m now close to a solution, and the idea I’m currently looking at could add some interesting gameplay mechanics.

Before stepping into catalyst, I should mention that I’m thinking of making light a simple spreading element, just as darkness is (previously, light was a wave with a wavefront and refractory states; I think I prefer the term restitution to refractory, though).

Instead of catalyst allowing light to overtake darkness, what if catalyst changed the properties of light and darkness that it touches?

The first change is that darkness touched by catalyst becomes a new state, let’s call it “grey”. Grey spreads through empty cells, but cannot overtake light. Inside catalyst, grey is impervious to light and dark. Outside catalyst, grey succumbs to either light or dark, with preference to dark.

Similarly, light touched by catalyst becomes a new state, which I’ll call “radiation”. Radiation spreads through empty cells, but cannot be overtaken by dark (independent of catalyst). Outside catalyst, radiation is overtaken by light.
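The core transformation rule itself would be tiny. A sketch (the state names and function shape are mine, and the neighbor-dependent spreading/succumbing rules are omitted):

    /* Sketch: catalyst transforming the states it touches.
       Enum values and function shape are illustrative only. */
    enum cell_state { EMPTY, LIGHT, RADIATION, DARK, GREY };

    enum cell_state catalyst_touch(enum cell_state s, int under_catalyst)
    {
        if (!under_catalyst)
            return s;          /* the transformation happens only inside catalyst */
        if (s == DARK)
            return GREY;       /* darkness touched by catalyst becomes grey */
        if (s == LIGHT)
            return RADIATION;  /* light touched by catalyst becomes radiation */
        return s;
    }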

The interaction between the various states would create interesting new mechanics and level-design possibilities. Let’s take a look at how this transforms the mechanics and solves some of the issues that I wanted to address.

First, the interaction of light (and dark) with catalyst is no longer localized to the catalyst. Both radiation and grey can escape beyond catalyst (when the catalyst is not entirely surrounded by light/dark). This opens up new possibilities and choices for the user, who now faces questions like the following:

Should a catalyst object be used to transform light into radiation? Would it be better to use it to change darkness into grey? When is it safe to move a catalyst which transforms light into radiation (because it will turn back to normal light and no longer hold back darkness)?

It also solves the problem of moving catalyst. Imagine a passage with a width smaller than the diameter of a ball of catalyst. On the left side is light, and on the right is darkness. The light and dark arrived at the same time, so the left half of the ball contains radiation, and the right half contains grey (they are at a standoff).

Now, the user grabs the catalyst and moves it one cell to the right. The newly covered cells of darkness on the leading (right) edge transform into grey. The newly uncovered radiation on the left becomes exposed and is transformed into light on the next tick (by the bordering light). The radiation and grey near the middle of the ball remain at a standoff.

As the user continues moving the ball of catalyst to the right, the same process repeats, but the quantity of radiation decreases and the ball eventually fills with grey (the “standoff line” is fixed while the ball moves right). However, once grey is exposed on the trailing edge, it turns to light. The user can “paint” through passages to remove darkness, so long as there is light behind the catalyst.

However, the user cannot arbitrarily paint out into darkness. It will still transform the darkness into grey, but radiation cannot be extended, and eventually, the exposed grey will be overtaken by dark. Even radiation would be transformed by light into light, then darkness would overtake the light.

If the user moves the ball in the opposite direction, radiation would continue to hold the darkness back. This wasn’t my original intention, but it seems acceptable. The user must leave the ball in the corridor (unless he finds a way to transform all preceding light into radiation), otherwise light will overtake the radiation, then dark will overtake the light.

A consequence is that there definitely are loss conditions, and some may not be possible for the game engine to detect. So I should give up the idea of the game not having a loss condition. I think that is okay and would make the gameplay a little more intense. Some of the more difficult levels can be “races” to accomplish certain steps before the level devolves into a loss condition. I’ll consider a “slow” gameplay mode as an “easy” option for less dexterous players.

Teledyne goin’ well

I just finished my 4th week back at TAI. Things are going pretty well now compared to the issues I faced the first few weeks. It’s not all perfect, but I’ve gotten to the point where I know what the final electronics and software should be (there will probably be some tweaks, but the design seems pretty solid).

I’ve also learned enough about the existing code base to see how to integrate the new elements, and I got to do some (admittedly very simple) digital electronics design, which was a fun change of pace.

There were a few portions of the existing design that had me worried. I’m attempting to repurpose a portion of the hardware (to avoid unnecessary modifications), which is always risky. I checked the schematics and verified the address and data bus logic. Fortunately, it all checks out, so I’m GTG.

There are only a few minor issues at the moment. I still need to get hold of some components and other equipment, but it looks like I’ll have the necessary items soon.

All-in-all, a good, productive week. Yay!

Trials n tribulations

I’ve been back at TAI only a couple weeks, but I’ve had a heck of a time with my first project. The project I took over was just barely underway and there was a lot that needed to be done to make any real progress. I had to order Microchip PIC microcontrollers (MCUs) and debuggers, set up the compiler and IDE, fight with 64-bit vs 32-bit Windows issues, etc, etc.

It’s been a long time since I’ve done embedded development, and even longer since I’ve worked with a PIC MCU. So long, in fact, that back then I had little choice but to program in assembler because the C compiler wasn’t ready yet (though there might have been a pre-alpha version of a C compiler for the venturous).

Anyway, as of yesterday, the project code compiled and ran, but the MCU was not functioning as expected, so I went through all the configuration and setup code and read through the PIC data sheets (plus the compiler manual). It may not seem like much, but there are literally hundreds of configuration bits and parameters with cryptic mnemonic names like RA2, TRISA, __CONFIG, OSCCON, etc. All of the bits and parameters need to be “just so” or the MCU will behave very differently than you expect.
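For a flavor of what “just so” means, here’s an illustrative fragment; the register names come from the data sheet, but the values and available bits vary by device and compiler, so treat everything below as placeholders:

    /* Illustrative only: register names come from the PIC data sheet;
       values and the header are device/compiler-specific placeholders. */
    #include <xc.h>            /* Microchip compiler header (assumed) */

    void init_io(void)
    {
        OSCCON = 0x70;         /* internal oscillator select (device-specific) */
        TRISA  = 0x04;         /* RA2 as an input; the rest of PORTA as outputs */
        /* ...and dozens more registers that must be set "just so"... */
    }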

Today I figured out what the configuration and startup issues were. It was the first time I felt like I’d made real, verifiable progress. Of course, getting through all the hurdles was necessary work, but it sure didn’t feel like progress. The PIC code is pretty close to fully functional now, and I’ve gained a lot of knowledge about the device architecture, which is at least worth something.

Now on to other parts of the project… sigh…

First week back at TAI

I survived my first week back at Teledyne Analytical Instruments (TAI) just fine. For a moment, I was worried things had gone a bit awry, but a little constructive email dialog straightened things out, and even earned me a kudo.

There actually is one little issue I fretted over somewhat, but since there isn’t anything I can do about it now, I’ll just wait until Monday to see what comes of it.

In any case, I’m glad it was a short week (due to Good Friday) — I was really beat by the end of Thursday. Besides adjusting to the new schedule, I’d been up late each night due to various goings-on at home.

Since it was my first week, there was all the usual “new hire” paperwork, training, and whatnot — or rather more precisely, “re-hire” paperwork, training, and whatnot, which amounts to the same thing. Especially after more than a decade.

Also, since I knew so many people at the company, I kept getting into catch-up conversations with coworkers. It was really nice to see them again, but I didn’t want to spend all day talking and being disruptive, so I took to sneaking around the facility to avoid running into people at inopportune moments. I also sought cohorts during breaks and lunch. During the course of the week, I managed to catch up with most everyone without causing too much disturbance.

Even with all the paperwork and catch-up chat, I managed to get some real work done. I got my work area and computer set up, including setting up a compiler/IDE for an embedded system, and resolved some product design issues with my colleagues.

It was neat to see some of the old products I helped design. Some were still in use, and others had been modified to create new products. At times in the past, I’d gotten the feeling that the things I’ve worked on had no lasting value, but here were products I helped create, functioning and useful to those who rely on them, in some cases 15 years after the fact.

I’d forgotten about a piece of automated test equipment that I worked on during my first “tour of duty”, but I remembered it after a couple of days. I went searching for it Thursday after production went home. I checked the room it used to be in, but it wasn’t there, so I asked one of the guys if we still had it. Sure enough, we did; it had just been moved to a different room.

I remembered it had a large “switch box” (a bit bigger than a large desktop PC case) which used to be mounted up on a wall. But now, there was only a small box up on the wall, so I thought “Oh man, it must have broken down and they replaced it with a smaller unit… For shame!”

But that box didn’t have the kinds of cables I expected, so I started looking around, followed some ribbon cables behind a desk, and found the old switch box hidden behind some boxes under the desk!

It might not seem like much, but for me that was pretty cool. Another nice thing is that one of the newer employees said he recognized my name because my signature was on a lot of company drawings. He politely neglected to mention the certainly dubious quality of the drawings in question [just kidding, folks]. The reason I signed a lot of drawings is that, besides design work, I did production engineering for about a year and processed many Engineering Change Orders (better known as ECOs).

All-in-all, it was a good week, and (for whatever it’s worth) I learned I have at least a minimal corporate legacy after all.

Teledyne tomorrow

Tomorrow will be my first day back at Teledyne in a long, long time (well more than a decade). I don’t know quite what to expect after such a long hiatus, but I’m looking forward to working with my previous cohorts once again.

Objects, CA rules, catalyst radius

I can simplify the Containment design by removing some states, using “level objects” in their place. In addition, since “catalyst” requires a radius value in each cell, I can use the presence of a nonzero catalyst radius as the flag for catalyst (instead of treating it as a “state”).

Now, the required states (numbered as they are referenced in the transition tables below) are:

  0. empty
  1. light
  2. refractory
  3. darkness
  4. annihilation
  5. environment

The state transitions when catalyst is not present are:

  0 1 2 3 4 5 DEFAULT
  0: 1 3
  1: 3 2
  2: 3 0
  3:
  4: 0
  5:

The state transitions under catalyst are:

  0 1 2 3 4 5 DEFAULT
  0: 1 3
  1: 2
  2: 0
  3: 2 4 0
  4: 0
  5:

Under this new set of states and transitions, light remains a wave, but annihilates with darkness in the presence of catalyst, overtaking darkness at the rate of 1 cell every 4 ticks. This rate may be too slow for the final design, but is sufficient for now.

Since light annihilates with darkness, it limits the distance a beacon may be used to move light beyond the currently lit areas. Without annihilation, it would be possible to capture a ball of light in a beacon and move it anywhere, defeating a large portion of the intended design.

Level objects are stored and managed separately from the CA. The current plan is to manage them entirely in Lua [think of them as simple "script objects"]. A level object affects the CA by injecting a state into a cell.

A light source injects light state (1) every frame, generating a light wave. This means there is no longer a necessary loss condition should all light sources become surrounded by darkness. So for the time being, there will be no loss condition. Care should be taken to avoid creating levels which can become unwinnable.

Darkness is self-sustaining and does not need a constant source. A level object will be used, but it will inject darkness (3) into a cell once at level initialization. Elimination of all darkness is the win condition.

In the future, darkness level objects should be able to specify a radius so that portions of levels can be initialized with dark regions. Ideally, this would be accomplished by running the simulation for a specified number of ticks, but that may be too slow to be practical.

A beacon injects a catalyst radius into a cell.

Catalyst updates as follows: each cell finds the maximum catalyst radius in its von Neumann neighborhood (including its own value), subtracts one, and clamps the result to zero (i.e. it must remain non-negative).
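A sketch of that update (the grid storage, fixed dimensions, and double-buffering are my assumptions):

    /* Sketch of the catalyst-radius update. Grid size, storage, and
       the bounds-checked accessor are assumptions for illustration. */
    #include <string.h>

    #define W 64
    #define H 64

    static int radius[H][W];        /* current catalyst radius per cell */
    static int scratch[H][W];       /* next tick's values */

    static int at(int y, int x)     /* out-of-bounds cells contribute zero */
    {
        return (y < 0 || y >= H || x < 0 || x >= W) ? 0 : radius[y][x];
    }

    void update_catalyst(void)
    {
        int x, y;
        for (y = 0; y < H; y++) {
            for (x = 0; x < W; x++) {
                /* maximum over the von Neumann neighborhood, including self */
                int m = at(y, x);
                if (at(y - 1, x) > m) m = at(y - 1, x);
                if (at(y + 1, x) > m) m = at(y + 1, x);
                if (at(y, x - 1) > m) m = at(y, x - 1);
                if (at(y, x + 1) > m) m = at(y, x + 1);
                m -= 1;                          /* subtract one... */
                scratch[y][x] = (m < 0) ? 0 : m; /* ...and clamp to zero */
            }
        }
        memcpy(radius, scratch, sizeof radius);  /* commit after the full pass */
    }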