A “Finished” Testbed Application

As I’ve worked on the past several posts, I have had a very specific milestone in mind. You see, I want to use this LabVIEW blog to demonstrate and explain a lot of things that I have learned over the years, but the proper presentation of those topics typically requires a certain amount of pre-existing infrastructure.

That is what I have been doing through the preceding posts — building the conceptual basis for a testbed application that embodies what we have discussed so far. Having this infrastructure is critical for thorough, disciplined learning because without it you are left with example code that may demonstrate one point well, but violates a dozen other principles that are standard “best practices”.

Having said that, however, remember the quotation marks around the word Finished in the title. The marks are there to highlight the fact that, as with all LabVIEW applications, this testbed will never really be finished. In fact, the plan is for it to get better over time and support more functionality. The best we can say is that the testbed is done for now.

Come next week though, who knows? There’s still a lot to do.

The Testbed Project

Like the UDE template, the current version of the testbed and all its associated VIs is available through the website’s subversion repository at:

http://svn.notatamelion.com/blogProject/testbed application/Tags/Release 1

You can use the web client as you did before, but considering the number of files, you will find it less time-consuming (and less error-prone) to use a dedicated client application like TortoiseSVN on Windows. Regardless of how you get the testbed project onto your computer, the first thing that you see with any application is the project file that is its home. When you open testbed.lvproj, note that it embodies two of the conventions that I use on a regular basis.

  1. There are virtual folders for two types of items — both of which are contained in LabVIEW libraries:
    • The _ude folder contains the libraries associated with the two UDEs that are currently being used.
    • The _libraries folder contains the libraries embodying the main functional areas that the application currently uses. The Configuration Management library houses the files that the application uses to fetch configuration information (right now it only needs to read the launch processes). The Startup Processes library contains the VIs that will be launchable when the application starts, along with the process-specific VIs that they use. Finally, the Path Utilities library contains the VIs that the code uses to locate the internal project home directory in an executable.
  2. The launcher VI is directly in My Computer, along with its INI file, testbed.ini.
    As I have discussed elsewhere, this naming convention simplifies a variety of tasks such as locating the application INI file.

Looking at the VIs

The remainder of this post will look at the VIs used in creating the testbed application. Right now the discussion will be at a rather high level, as we will be digging into them in more detail in the future. With each VI, I also list a few of the future topics that I will use that VI to demonstrate. If you would like to see others, let me know in the comments.

Testbed.vi

Here we see pretty much what you would expect. As discussed in the previous post, the code starts by calling a subVI that stores the launcher’s path in an FGV that will make it available to the rest of the application. It then calls the subVI that reads the INI file to get the list of processes to launch.

The array of process descriptions thus retrieved is passed to the main loop, which processes each element individually. The LabVIEW documentation does a good job of describing asynchronous call-and-forget, so I won’t repeat it here. Instead I will point out the 1-second delay that follows the dynamic call logic. Oftentimes, a newly started process has a lot it needs to do. This delay gives the new process a 1-second window in which the launcher is not doing any potentially time-consuming tasks — like loading the next process it will launch into memory. The delay may not be technically necessary, but I have found that it can sometimes make things run a bit smoother.
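
LabVIEW’s Asynchronous Call-and-Forget has no direct text-language equivalent, but the launcher’s staggered startup can be sketched in Python using threads. This is only an analogy of the idea described above; the process names, the dictionary structure, and the shortened delay are all illustrative, not part of the testbed:

```python
import threading
import time

def launch_processes(process_list, settle_delay=1.0):
    """Start each process "call-and-forget" style in its own thread,
    pausing between launches so each new process gets a quiet window
    in which to finish its own startup work."""
    for entry in process_list:
        worker = threading.Thread(target=entry["run"], name=entry["name"], daemon=True)
        worker.start()
        # The launcher sits idle here, mirroring the 1-second delay
        # that follows the dynamic call in the testbed.
        time.sleep(settle_delay)

# Two stand-in "processes" that just record that they started.
results = []
processes = [
    {"name": "Acquire Data", "run": lambda: results.append("acquire")},
    {"name": "Display Data", "run": lambda: results.append("display")},
]
launch_processes(processes, settle_delay=0.01)
```

The delay serves the same purpose in both settings: each new process gets a window where the launcher itself is not competing for time.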

Finally, you may be wondering why there is logic to explicitly close the launcher window. Isn’t there a VI property that tells LabVIEW to automatically close the window if it was originally closed? Well, yes there is, and you would think that it would handle this situation. In fact, it does — but only in the development environment. When you build the application into an executable, the auto-close doesn’t work for the top-level VI window — or at least it doesn’t in LabVIEW 2014.

Launcher

    Future Enhancements

  • Showing the “busy” cursor
  • Enhancing user feedback
  • How to launch other standalone executables
  • How to launch VIs that need input parameters

Startup Processes.lvlib:Acquire Data.vi

This is the VI that the original producer loop turned into. The most noticeable change has been the addition of an event structure that both allows the process to respond to an application stop event and (via the timeout event) provides the acquisition timing.

Acquire Data

    Future Enhancements

  • Handling errors that occur during initialization
  • Turning acquisition on and off
  • Changing acquisition parameters
  • Correcting for timing “slips” when using the timeout event

Startup Processes.lvlib:Display Data.vi

This VI is the new consumer loop. It retains the event loop it had before, but adds an event for stopping the application.

Display Data

    Future Enhancements

  • Using subpanels to generalize your interface code — and avoid tab controls
  • Implementing dialog boxes that don’t stop everything while they are open
  • The advantages of the “System”-themed controls

In addition to these VIs, there’s no rule that says there can’t be more — like, say, one for handling errors. You may also need additional processes for handling tasks that have differing timing requirements, or that communicate through different interfaces. We’ll look at those too.

But for now, feel free to poke around through the code, and if you have any questions ask in the comments. Also try building the application and perhaps even the installer.

Until next time…

Mike…

Getting Everything Running

I left you last time stuck on the horns of a dilemma. To wit: It’s not very hard to create LabVIEW applications structured as multiple parallel processes that are for the most part operating independently of one another. However, now you are going to need some way to launch all those independent processes and manage their operation.

These are some of the issues that we’ll deal with this time, but first you may be wondering if this methodology is really worthwhile. Specifically, what benefits are to be garnered? In a word: modularity — on steroids (Ok, that’s three words. So sue me).

The Case for Autonomous Processes

A few posts ago, I dealt in broad terms with the issue of modularity and mentioned that it affects both the design and implementation of an application. Consequently, modularity has implications that can reach far beyond simply the structure of an application. Modularity can also affect such diverse, and apparently unrelated, areas as resource allocation and even staffing.

For example, let’s say you have an application that needs to acquire data, display data, and manage system events that occur. Now let’s say you structure this application such that it has three loops on the same block diagram — or worse, one big loop that tries to do everything. How many developers can effectively work on the application at once? One.

Oh you might be able to split off a few random tasks like developing lower-level drivers, or prototyping the user interface, but when crunch time comes the work will fall to one person. Heaven help you if that one person gets sick. Or that one person gets hit by a bus. Or that one person gets tired of dealing with the stress caused by being the project critical path incarnate and quits.

And don’t forget that one developer is a best-case scenario. Such projects can easily turn into such a mess that there is no way anyone can “effectively” work on them. If you get thrust into such a situation, and you don’t have a manager that really and truly has your back, the best you can hope for is to survive the encounter. (And, yes. I am speaking from personal experience.)

But let’s say that you decide to use a different modularization. Let’s say that you break the application into three independent, but interacting, processes. Now how many people can effectively work on the project at once? Three. Likewise, at crunch time how many people can assist with integration testing and validation? Three — and each one with a specific area of expertise in the project domain.

Oh, in case you’re wondering, “three” is a worst-case scenario. Depending on how each of those three pieces is structured, the number of possible concurrent operations could go much higher. For instance, the lead developer of the DAQ module might decide to create separate processes for DAQ boards and GPIB instruments; or the GUI developer might identify 4 separate interface screens that will run independently, and so can be developed independently. Now you are up to 7 parallel development tasks, and 7 pairs of hands that can pitch in at crunch time.

So in terms of justification, ‘Nuff said… All we have to do now is figure out where to put the logic that we need.

Making a Splash [Screen]

Did you ever wonder why so many large applications have big splash screens telling you about the program you are launching? Is narcissism really running rampant in the software industry? Or is there a little sleight of hand going on here?

Part of the answer to those questions lies in a magic number: 200. That number is important because that is the approximate number of milliseconds that the average user will wait for a program to respond before beginning to wonder if something has gone horribly wrong. Any way you slice it, 200 msec isn’t remotely long enough to get a large program loaded into memory and visible to the user.

So what is a developer to do? As quickly as possible, put up an appealing bit of eye-candy, the implicit (and occasionally explicit) purpose of which is to congratulate the user for having the wisdom and foresight to launch your program. Then, while the user is thus distracted, get everything else loaded and running as quickly as you can.

That is why programs have splash screens. It also points to one of the unique aspects of a splash screen: Unlike most VIs, what you see on the front panel of a splash screen often has little or nothing to do with what the VI is actually doing. Or put another way, splash screens are a practical implementation of the concept, “Ignore the man behind the curtain!”

Designing a Launch Pad

As you should expect from me by now, the first thing to do is consider the basic requirements for this launcher VI and list the essential components:

  1. A list of processes to be launched. As a minimum, this list will provide all the information needed to identify the process VIs. However, it must also be expandable to allow the inclusion of additional runtime parameters.
  2. A loop to process the launch list. This loop will step through the launch list one element at a time and perform whatever operations are needed to get each process running. As a first-pass effort, the launcher must be able to handle reentrant and non-reentrant processes.

Because the loop that does the actual launching is pretty trivial, we’ll concentrate our design effort on a discussion of the data structure needed to represent the launch list. Clearly, the list will need to vary in size, so the basic structure will need to be an array. Next, to properly represent or describe each process to be launched (as well as any runtime data it may require) will probably require a variety of different datatypes, so the structure inside the array will need to be a cluster — and a typedef’d one at that. It is inevitable that eventually this data will need to change.

To determine what information the cluster will need to contain, let’s look at what we know about the call process that we will be using. Because these processes will be operating independently from one another, the launcher will use the Asynchronous Call-and-Forget method. To make this technique work, we will need the path to the VI so the launcher can load it into memory, and something to identify how each VI should be run. Remember, we want to support both reentrant and non-reentrant processes. While the launch processes that the two types of VIs require are very similar, they are not identical, so at the very least we need to be able to distinguish between them. But what should the datatype of this selector be? Most folks would figure that the selector only has 2 possible states, and so they would make it a Boolean…

…and most folks would be wrong.

There are 3 reasons why a Boolean value is incorrect:

  1. It is inherently confusing. There is no natural or intuitive mapping between possible logical values of this parameter and the true or false states available with a Boolean.
  2. It runs counter to the logical purpose of a Boolean. Boolean controls aren’t for just any value that has two states. They are for values that can only have two states. This selector coincidentally happens to have two states.
  3. It will complicate future expansion. There are other possible situations where the technique for using a given process might vary from the two currently being envisioned. When that happens you’ll have to go back and change the parameter’s fundamental datatype — or do something equally ugly.

The correct solution is, of course, to make the value a typedef enumeration with two values, “Normal” and “Reentrant”.
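
The same design decision translates directly into text languages. Here is a rough Python sketch of what one element of the launch list might look like, with the selector as a proper enumeration rather than a Boolean. The field names and paths are hypothetical stand-ins for the typedef’d cluster described above:

```python
from dataclasses import dataclass
from enum import Enum

class RunMode(Enum):
    """Stands in for the typedef'd LabVIEW enumeration."""
    NORMAL = "Normal"
    REENTRANT = "Reentrant"

@dataclass
class LaunchEntry:
    """One element of the launch list: the path identifying the process
    VI, plus how it should be run. More runtime parameters can be added
    later without disturbing the existing fields."""
    vi_path: str
    run_mode: RunMode = RunMode.NORMAL

launch_list = [
    LaunchEntry("Startup Processes/Acquire Data.vi", RunMode.REENTRANT),
    LaunchEntry("Startup Processes/Display Data.vi"),
]
```

Because the selector is an enumeration, adding a third launch technique later means adding one value, not changing the parameter’s fundamental datatype.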

Handling Paths

While we are considering the data details, we also need to talk a bit about the VI path parameter. At first blush, this might seem a straightforward matter — and in the development environment it is. The challenge arises when you decide to build your code into a standalone executable. To understand why this is so, we need to consider a little history.

With LabVIEW versions prior to LabVIEW 2009, executables that you created stored files internally in what was essentially a flat directory structure. Hence if you wanted to build a path to a VI inside an executable it was just the path to the executable with the VI name added to the end, like so:

C:\myProgram\myProgram.exe\myProgram.vi

This simple structure made it very easy to build a path to a VI and worked well for many years. However, as the LabVIEW development environment became more sophisticated, this simple structure began to exhibit several negative side-effects. One of the most pernicious of those effects was one that nearly everyone tripped over at least once: the path of a VI in an executable, while simple to build, was different from the path to the same VI in the development environment.

To fix this problem (and many, many, many others) LabVIEW 2009 defined a new internal structure that more closely reflects the directory structure present in the development environment. Thanks to this change, once you know the path to the “project directory” inside the executable, the logic for building a path to a VI is exactly the same, regardless of the environment in which it is running. The only trick now is getting to that internal starting point, and that can be a bit tricky depending on how you have VIs stored on your computer. Being basically a simple man, when approached with a requirement such as this I prefer an uncomplicated approach — and what could be less complicated than a functional global variable?

Because the launcher VI is always in the project root directory, writing its path (stripped of the VI name, of course) to a FGV provides a fast shortcut for any process or function that needs to build an internal path. Note that this expedient is only needed for LabVIEW files that are located in the executable. For non-LabVIEW files, the Application Directory node provides the same logical starting point for path building.
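
The FGV idea maps naturally onto a module-level cached value in a text language. Here is a minimal Python sketch of the same pattern; the example path is a made-up stand-in for a launcher inside a built executable:

```python
from pathlib import PurePosixPath

_project_root = None  # plays the role of the FGV's shift register

def set_project_root(launcher_path):
    """Called once at startup: store the launcher's path, stripped of
    the VI name, as the starting point for internal path building."""
    global _project_root
    _project_root = PurePosixPath(launcher_path).parent

def project_path(*parts):
    """Build a path to an internal file from the stored starting point."""
    if _project_root is None:
        raise RuntimeError("set_project_root() has not been called yet")
    return str(_project_root.joinpath(*parts))

# Hypothetical launcher location inside a built executable.
set_project_root("/deploy/testbed.exe/testbed.vi")
```

Any process launched afterwards can call `project_path("Startup Processes", "Acquire Data.vi")` without knowing, or caring, where the application actually lives.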

Managing Windows

Finally, if you implemented an application with all the features that we have discussed to this point you would have an application that would acquire data in the producer loop, and seamlessly pass it to the consumer loop. Unfortunately, you wouldn’t be able to see what it was doing because there wouldn’t be any windows open.

By default, when you dynamically call a VI the target of the call doesn’t automatically open its front panel. This behavior is fine for the acquisition VI, because we want it to run unseen in the background anyway. However, it also means that you can’t see the data display. To make the display visible we need to tweak a few VI properties for the display VI. To make this sort of setting easy, the Window Appearance portion of the VI properties dialog box provides a couple of predefined window definitions. The one that will get us what we want is the top-most radio button, labeled Top-level application window. This selection sets a number of properties, such as turning off the scrollbars and setting the VI to automatically open its front panel when it runs and close it when it finishes.

Note that the Window Appearance screen also allows you to specify a runtime title for the window — an optional but professional touch to add to your application. While this setting only defines a static title, you can also set the title dynamically from the VI using a VI property node.

So that’s about it for now. The next time we get together I’ll introduce how I have applied all the infrastructure we have been discussing to the “testbed” application introduced in the post on UDEs.

Until next time…

Mike…

Making UDEs Easy

For the very first post on this blog, I presented a modest proposal for an alternative implementation of the LabVIEW version of the producer/consumer design pattern. I also said that we would be back to talk about it a bit more — and here we are!

The point of the original post was to present the modified design pattern in a form similar to that used for the version of the pattern that ships with LabVIEW. The problem is that while it demonstrates the interaction between the two loops, the structure cannot be expanded very far before you start running into obstacles. For example, it’s not uncommon to have the producer and consumer loops in separate VIs. Likewise, as happened with someone I recently helped on the forums, you might want to have more than one producer loop passing data to the same consumer. In either case, explicitly passing around a reference complicates matters because you have to come up with ways for all the separate processes to get the references they need.

The way around this conundrum lies in the concept of information hiding and the related process of encapsulation.

Moving On

The idea behind information hiding is that you want to hide from a function any information that it doesn’t need to do its job. Hiding information in this sense makes code more robust because what a routine doesn’t know about it can’t break. Encapsulation is an excellent way of implementing information hiding.

In the case of our design pattern, the information that is complicating things is the detailed logic of how the user event is implemented, and the resulting event reference. What we need is a way to hide the event logic, while retaining the functionality. We can accomplish this goal by encapsulating the data passing logic in a set of VIs that hide the messy details about how they do their job.

Standardizing User-Defined Events

The remainder of this post will present a technique that I have used for several years to implement UDEs. The code is not particularly difficult to build, but if you are a registered subscriber it can also be downloaded from the site’s Subversion SCC server.

The first thing we need to do is think a bit and come up with a list of things that a calling program would legitimately need to do with an event — and it’s actually a pretty short list.

  1. Register to Receive the Event
  2. Generate the Event
  3. Destroy the Event When the Process Using it Stops

This list tells us what VIs the calling VI will need. However, there are a couple more objects that those VIs will need internally. One is a VI that generates and stores the event reference; the other is a type definition defining the event data.

Finally, if we are going to be creating 4 VIs and a typedef for each event in a project, we are going to need some way of keeping things organized. So let’s define a few conventions for ourselves.

Convention Number 1

To make it easy to identify what function a given event VI performs, let’s standardize the names. Thus, any VI that creates an event registration will be called Register for Event.vi. The other two event interface VIs will, likewise, have simple, descriptive names: Generate Event.vi and Destroy Event.vi. Finally, the VI that gets the event reference for the interface VIs shall be called Get Event Reference.vi, and the typedef that defines the event data will be Event Data.ctl.

But doesn’t LabVIEW require unique VI names? Yes, you are quite right. LabVIEW does indeed require unique VI names. So you can’t have a dozen VIs all named Generate Event.vi. Thus we define:

Convention Number 2

All 5 files associated with an event shall be associated with a LabVIEW library that is named the same as the event. This action solves the VI Name problem because LabVIEW creates a fully-qualified VI name by concatenating the library name and the VI file name. For example, the name of the VI that generates the Pass Data event would have the name:
Pass Data.lvlib:Generate Event.vi
While the VI Name of the VI that generates the Stop Application event would be:
Stop Application.lvlib:Generate Event.vi

The result also reads pretty nicely. It still doesn’t help the OS, though, which will not allow two files with the same name to coexist in the same directory. So we need:

Convention Number 3

The event library file, as well as the 5 files associated with the event, will reside in a directory with the same name as the event — but without the lvlib file extension. Hence Pass Data.lvlib, and the 5 files associated with it would reside in the Pass Data directory, while Stop Application.lvlib and its 5 files would be found in the directory Stop Application.

So do you have to follow these conventions? No, of course not, but as conventions go, they make a lot of sense logically. So why not just use them and save your creative energies for other things…

The UDE Files

So now that we have places for our event VIs to be saved, and we don’t have to worry about what to name them, what do the VIs themselves look like? As I mentioned before, you can grab a working copy from our Subversion SCC server. The repository resides at:

http://svn.NotaTameLion.com/blogProject/ude_templates

To get started, you can simply click on the link and the Subversion web client will let you grab copies of the necessary files. You’ll notice that when you get to the SCC directory, it only has two files in it: UDE.zip and readme.pdf. The reason for the zip file is that I am going to be using the files inside the archive as templates and don’t want to get them accidentally linked to a project. The readme file explains how to use the templates, and you should go through that material on your own. What I want to cover here is how the templates work.

Get Event Reference.vi

This VI’s purpose is to create, and store for reuse, a reference to the event we are creating. Given that description, you shouldn’t be too surprised to see that it is built around the structure of a functional global variable, or FGV. However, instead of using an input from the caller to determine whether it needs to create a reference, it tests the reference in the shift-register and if it is invalid, creates a reference. If the reference is valid, it passes out the one already in the shift-register.
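
The get-or-create behavior of this FGV is easy to mimic in a text language. In this Python sketch, a Queue object plays the part of the LabVIEW user event refnum; everything else about the pattern — test the stored reference, create it only if invalid, otherwise reuse it — carries over directly:

```python
import queue

_event_ref = None  # the FGV's shift register holding the reference

def get_event_reference():
    """Return the stored event reference, creating it on the first
    call. A Queue stands in for the LabVIEW user event refnum."""
    global _event_ref
    if _event_ref is None:          # the "is the reference valid?" test
        _event_ref = queue.Queue()
    return _event_ref
```

Every caller, no matter where it is in the application, gets the same reference without any wiring between them.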

If you consider the constant that defines the event datatype, you will observe two things. First, you’ll see that it is a scalar variant. For events that essentially operate like triggers, and so don’t have any real data to pass, this configuration works fine. Second, there is a little black rectangle in the corner of the constant indicating that it is a typedef (Event Data.ctl). This designation is important because it significantly simplifies code modification.

If the constant were not a typedef, the datatype of the event reference would be a scalar variant and any change to it would mean that the output indicator would have to be recreated. However, with the constant as a typedef, the datatype of the event is the type definition. Consequently you can modify the typedef any way you want and every place the event reference is used will automatically track the change.

Register for Event.vi

This VI is used wherever a VI needs to be able to respond to the associated event. Due to the way events operate, multiple VIs can, and often will, register to receive the same event. As you look at the template block diagram, however, you’ll notice something is missing: the registration output. The reason for this omission lies in how LabVIEW names events.

When LabVIEW creates an event reference it naturally needs to generate a name for the event. This name is used in event structures to identify the particular event handler that will be responding to an event. To obtain the name that it will use, LabVIEW looks for a label associated with the event reference wire. In this case, the event reference is coming from a subVI, so LabVIEW uses the label of the subVI indicator as the event name. Unfortunately, if the name of this indicator changes after the registration reference indicator is created, the name change does not get propagated. Consequently, this indicator can only be created after you have renamed the output of the Get Event Reference.vi subVI to the name that you wish the event to have.

The event naming process doesn’t stop with the event reference name. The label given to the registration reference can also become important. If you bundle multiple references together before connecting them to an event structure’s input dynamic event terminal, the registration indicator is added to the front of the event name. This fact has two implications:

  1. You should keep the labels short
  2. You can use these labels to further identify the event

You could, for example, identify events that are just used as triggers with the label trig. Alternately, you could use this prefix to identify the subsystem that is generating the event like daq or gui.

Generate Event.vi

The logic of this template is pretty straightforward. The only noteworthy thing about it is that the event data (right now a simple variant) is a control on the front panel. I coded it this way to save a couple of steps if I need to modify it for an event that passes data. Changing the typedef will modify the front panel control, so all I have to do is attach it to a terminal on the connector pane.

Destroy Event.vi

Again, this is very simple code. Note that this VI only destroys the event reference. If it has been registered somewhere, that registration will need to be destroyed separately.

Putting it all Together

So how would all this fit into our design pattern? The instructions in the readme file give the step-by-step procedure, but here is the result.

image

As explained in the instructions, I intend to use this example as a testbed of sorts to demonstrate some things, so I also modified the event data to be a numeric, and changed the display into a chart. You’ll also notice that the wire carrying the event reference is no longer needed. With the two loops thus disconnected from each other logically, it would be child’s play to restructure this logic to have multiple producer loops, or to have the producer loop(s) and the consumer loop(s) in separate VIs.

By the way, there’s no reason you can’t have multiple consumer loops too. You might find a situation where, for example, the data is acquired once a second, but the consumer loop takes a second and a half to do the necessary processing. The solution? Multiple consumer loops.
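
The key property that makes this work is that every registered listener receives its own copy of each event. A rough Python analogue of the register/generate pair, using one queue per registration, shows the idea; the function names mirror the UDE convention but the mechanism here is only a stand-in for LabVIEW’s actual event system:

```python
import queue

_registrations = []  # one entry per registered listener

def register_for_event():
    """Each caller gets its own 'registration' to receive events on."""
    rx = queue.Queue()
    _registrations.append(rx)
    return rx

def generate_event(data):
    """Every registered listener receives its own copy of the event."""
    for rx in _registrations:
        rx.put(data)

# Two consumer loops register for the same event...
consumer_a = register_for_event()
consumer_b = register_for_event()
# ...and a single producer fires it once.
generate_event({"reading": 42})
```

Because neither producer nor consumer knows about the other, adding a third consumer (or a second producer) touches no existing code.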

However, there is still one teeny-weensy problem. If you now have an application that consists of several parallel processes running in memory, how do you get them all launched in the proper order?

Until next time…

Mike…

Don’t re-code when you can re-configure

A common problem that you see being discussed on the LabVIEW user forums comes from people who hard-code configuration data in their programs. Often they will be complaining about things like how long it takes to maintain all the different versions of their application, when in truth the only difference between “versions” is the contents of a few constants on a block diagram. Ultimately, these folks may realize that putting configuration data in constants is not good, but they may feel that they aren’t skilled enough, or don’t have enough time, to do anything better.

I suppose that the fundamental trouble is that it is way too easy to drop down a constant and type in some configuration data. Unfortunately, you can end up in a situation where you are constantly having to go back and manipulate or manage this “easy” solution. The good news, however, is that it is not hard to come up with a really good way of handling configuration data.

Data Abstraction

The first step to really fixing the problem is to draw a distinction between two things that we typically hold as being the same: what a particular parameter represents or means, and the parameter’s value. What we need to do is develop a mechanism for obtaining a parameter’s value based on what that value represents. Or to say it differently, we need to replace the constant containing a set parameter value with a VI that can return a value based on the parameter’s logical identity. I want to be able to say, for example, “What is the name of the directory where I should store the data files?” and have this VI return the parameter’s current value. Assuming that the configured value is stored outside the code itself, you can change that value to anything you want, anytime you want, without modifying a bit of code.
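
In miniature, the shift looks like this Python sketch: a lookup by logical identity replaces the literal constant. The backing dictionary and the section/key names are purely hypothetical; in the real VI the value would come from a configuration file:

```python
# A hypothetical parameter store; in practice the values would come
# from an ini file, a database, or the registry, not a hard-coded dict.
_parameters = {
    ("file paths", "data file destination"): "D:/acquired data",
}

def get_parameter(section, key):
    """Fetch a value by what it represents, not by a literal constant."""
    return _parameters[(section, key)]
```

Code that asks `get_parameter("file paths", "data file destination")` keeps working unchanged no matter where the answer is actually stored.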

In terms of where to store this configuration that is outside the code, there are several options:

  • A configuration file on disk
  • Local or networked database
  • The Windows registry

Each of these options has its place, but at this point the important thing is that the data lives somewhere outside the code. Once you have accomplished that, changing the data’s location from, say, an INI file to a database is comparatively trivial. With that point in mind, here is the code that I use to read data from the application’s ini file. I will put this code in a VI named for the parameter it is reading and use it in place of the constants that I would otherwise use. To make it easy to reuse, you might want to put it in a VI template file. The remainder of this post will concentrate on how the code works.

ini-based parameter reader

The first thing you’ll notice is that it takes the basic structure of a self-initializing functional global variable, or FGV. Remember that the point of this VI is to replace a constant, but you don’t want the code re-reading the configuration file every time it is called. Using an FGV structure allows the code to automatically load the required data the first time it’s needed, and thereafter use the buffered value. Note that if the ini file operations fail for whatever reason, the logic will attempt to reread the file each time the VI is called. Feel free to remove this logic if you desire, but it can be useful.
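
The same buffer-on-first-read behavior can be sketched in Python with the standard configparser module. As in the VI, a successful read is cached and a failed read caches nothing, so the file gets another chance on the next call. The section and key names are whatever your application defines:

```python
import configparser

_cache = {}  # successfully read values are buffered here, FGV-style

def read_parameter(ini_path, section, key):
    """Read one value from the ini file. The file is only touched the
    first time a value is requested; a failed read caches nothing, so
    the next call will try the file again."""
    if (section, key) not in _cache:
        parser = configparser.ConfigParser()
        if not parser.read(ini_path):
            raise IOError("could not read " + str(ini_path))
        _cache[(section, key)] = parser.get(section, key)
    return _cache[(section, key)]
```

After the first call, every subsequent request for the same parameter is just a dictionary lookup.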

Next, consider how I derive the path to the ini file. This logic is based on (and simplified by) the conventions that I follow for naming things. All the files associated with a project like my “Ultimate Data Logger” will reside in the directory:

c:/workspace/projects/Ultimate Data Logger

Likewise, the name of the project file itself will be Ultimate Data Logger.lvproj, the name of the top-level VI will be Ultimate Data Logger.vi, the name of the executable that is eventually built will be Ultimate Data Logger.exe and it will be installed in the directory Ultimate Data Logger on the target machine. Moreover, due to the way that Windows works, the name of the ini file associated with that executable must be Ultimate Data Logger.ini. Consequently, simply knowing the name of the application directory will tell me the name of the ini file. All I need to do is add the “.ini” file extension.
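That convention reduces “find the ini file” to a one-line path manipulation. As a hedged Python illustration (the function name and example path are mine, not the testbed’s):

```python
from pathlib import Path

def ini_path_for(exe_path: str) -> Path:
    """Following the naming convention, the ini file sits beside the
    executable and shares the application directory's name."""
    app_dir = Path(exe_path).parent
    return app_dir / (app_dir.name + ".ini")
```

So `ini_path_for("c:/Ultimate Data Logger/Ultimate Data Logger.exe")` yields a path ending in `Ultimate Data Logger.ini`.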

When using this logic, you will need to define the desired data value’s logical identity. In the context of this VI, that identity is defined by the Section and Key values on the block diagram. In this definition, the Section value should be seen as a broad category that includes several Keys or individual values. For example, you might have a section “file paths” that includes all the paths that your application needs so in that section you would find keys like “data file destination” or “import directory”.
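As a hypothetical illustration of that structure, the ini file for the example above might look like this (the paths are placeholders):

```ini
[file paths]
data file destination = c:\data
import directory = c:\import
```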

Obviously, you will also need to change the datatype of the data being obtained from the ini file. This job is easy because the low-level configuration file VIs are all polymorphic and so will adjust to the datatype you wire to them. A couple of recommendations I would offer: first, don’t store paths as paths; rather, convert them to strings and then save the string. Either way will work, but saving paths as strings produces a result that is more human-readable in the ini file. Likewise, when you are working with strings, there is an optional input on the VI that tells the function to save and read the data as “raw” strings. Always wire a true Boolean constant to these inputs.

Finally, note that the code includes logic to create the section and key in the ini file if they do not already exist. This is something that I have found useful, but if you don’t, feel free to remove it.

This basic structure works well for parameters that have single values, but what if you have a parameter that is associated with several values, like a cluster? That’s a situation we will deal with — real soon.

Until next time…

Mike…

The Opposite of Simple

It may not have occurred to you, but every application exhibits two forms of complexity. This dual nature of complexity can, at times, make it hard to keep straight what someone means when they start talking about “simplifying” an application.

First there is the complexity that is inherent in what you are trying to do. In that definition, notice that the word “inherent” is the important one. It means that there is no way of reducing that complexity without fundamentally changing the goal of your work.

Inherent Complexity in a Trip

As an example of what I mean, let’s go back to the summer between my 6th and 7th grades in school. That year my Dad had gotten a new job and we were moving to the (then) little town of Lebanon, Missouri. As we were driving to our new home we stopped along the way at a Shell filling station and I noticed that the oil company was offering a new travel service. If you sent in a card with the departure date, starting point and destination of a trip, they would send you all the required maps and plan the trip for you — for free…

Being the kind of kid I was (and some would say, still am) I snagged one of the postage-paid cards and sent in a request for a trip from our new home in Lebanon, Missouri to Nome, Alaska. Why Nome? Why not Nome? Moreover I stated that the trip was to be made in October. After sending off the card, I thought no more about it until about 3 weeks later when a very large box arrived for me from “Shell Oil Travel Service”. Oh yes, did I mention that I hadn’t told my folks I was sending off the card?

Upon opening the box, I discovered highway maps for several US states and a couple Canadian provinces with my suggested route highlighted in blue — or at least most of my trip was in blue. Also in the box was a very nice letter from a travel planner stating that there weren’t actually any roads connecting Nome to the rest of the world, but if I desired more assistance they would be glad to help me identify a reliable bush pilot and/or dog sled owner who would help me complete the last leg of my journey.

Needless to say, my parents were not amused and, being the good parents they were, they made me write an apology to the planner identified in the letter.

The point is that a trip from Lebanon to Nome in October involves a certain level of complexity and the only way to reduce that complexity is to change the trip. Instead of going to Nome, perhaps I could travel to Seattle, Washington. Or instead of going in October, I could make the trip in June when the ocean wasn’t frozen and I at least could travel the last leg by ship.

Induced Complication

In addition to an application’s inherent complexity, there is also the issue of how complicated I choose to make the implementation of that complexity. Getting back to my “Alaskan Adventure”, if I made the trip in a brand-new 4-wheel-drive vehicle with good snow tires and extra fuel tanks, the journey would be long, but manageable. However, if I tried to make the journey in a beat-up ’53 Nash Rambler with bald tires and an engine that burnt as much oil as it did gasoline, the trip would be much more complicated.

The same thing applies to software. There is a myth floating around the industry that complex requirements require complicated code. Actually, I have found no real correlation between the two. Therefore, I particularly like the tagline the LabVIEW Champion Christian Altenbach has for his forum posts:

Do more with less code in less time

As you’re designing and developing code, you must never forget the difference between inherent complexity and induced complication. In particular, remember that you are the inducer. You get to decide how complicated you make your code — don’t use complexity as an excuse for over-complication.

Until next time…

Mike…

Conventional Wisdom

The dictionary defines a “convention” as a set or standardized way of doing something. In LabVIEW programming, conventions are an effective organizational tool that provides an avenue for solving many common problems. In fact, there are far more opportunities for defining useful conventions than there is room to cover in this post, so to get you thinking we’ll just cover some of the key areas.

Directory Conventions

The topic of directory structure is a good place to start this discussion because this area is often neglected by developers. As you might expect, their primary focus is getting their code running — not where to store the code on disk. After all, one place is as good as any other, right?

Uh, not so much, no… Storing files just anywhere can cause or facilitate a variety of problems. For example, one of the most common is that you end up with multiple VIs sporting the same file name at various locations on disk. The best case scenario is that your application gets cross-linked to the wrong version of a file. The worst case is that your code gets cross-linked to a completely different file that just happens to have the same name. And if you’re really unlucky, the wrong file will have the same connector pane and IO as the one you want, so your code won’t even show up as being broken — it just stops working.

More problems can also arise when you decide to try to reuse the code on a new project or backup your work. If the files are spread all over your disk, how can you be absolutely sure you got everything? And the right version of everything? Of course, I have seen the opposite problem too. A developer puts EVERYTHING in one directory. Now you have a directory containing hundreds of files, so how do you find anything?

The correct answer is to go back to first principles and put a little effort into designing your development environment. If you think about it, there are basically two kinds of code you need to store: code that is specific to a particular project, and code that you are planning on reusing. Likewise, you want to minimize the code “changes” that result from a VI having been moved on disk. Consequently, I like to start at the root of a hard drive (it doesn’t matter which one) and create a folder called Workspace.

image

This directory could actually be named anything; the point is that it will be the home of all my LabVIEW code. Inside this directory I create two sub-folders. The Toolbox sub-folder contains all my code that is intended for reuse. To keep it organized, its contents are further broken up into sub-folders by function type. In addition, I make it easy to get to my reusable code by pointing the user.lib menu link for the LabVIEW function palette to this toolbox directory.

The other sub-folder inside my workspace directory is called Projects. It contains sub-folders for the various projects I am working on. Because I sometimes work for multiple clients, I also have my projects segregated by the year that I got the contract. If you work for a single company but support multiple departments, you might want to have separate folders for the various departments.

Please note that this directory structure is one that has worked well for me over the years, but it isn’t written in stone. The main thing is that you be consistent so that you (and the people who might be supporting your code in the future) can find what is needed.

Naming Conventions

The next area where having some set conventions can pay big dividends is the naming of things. If you are part of a larger organization, this is also an area where you should consider having a meeting to discuss the issue because it can have long-term effects on the way you and your group operate.

Online you will find a lot of recommendations as to how to name all sorts of things from entire projects to individual controls and indicators. The thing to remember is that some of these suggestions originated years ago and so are no longer appropriate — or necessary. For example, some people still advise you to prefix the name of a VI with the name of the project in which the VI is used. This technique, however, is counter-productive because it complicates the reuse of the VI. In addition, if you are making appropriate use of LabVIEW libraries (*.lvlib) it is unnecessary because the name of the library creates a namespace that, for LabVIEW’s internal use, is prepended to the VI’s file name.

Say for instance you have a library called Stop Application.lvlib and let’s further say that the library references a VI called Generate Event.vi. Now without the library, this VI name would be a problem because it is too non-specific. It is easy to imagine dozens of VIs that could have that same name. But because the VI is associated with a library, as far as LabVIEW is concerned, the name of that VI is Stop Application.lvlib:Generate Event.vi. By the way, you’ll notice that this composite name reads pretty nice for human beings too. The only remaining issue is that your OS doesn’t know about LabVIEW’s naming conventions, which is why I always put libraries (and the files they reference) in separate directories with the same name as the library, but minus the lvlib part.

Another common place where a naming convention can be helpful is in showing fixed relationships that LabVIEW doesn’t otherwise illustrate. A good use case for this type of naming convention would be with LabVIEW classes. Currently (as of LV2014) you can name a class anything you want but LabVIEW doesn’t organize the class lists in the project file to illustrate inheritance relationships. Until NI decides to fill this gaping hole, you can work around it by implementing a naming convention that (when sorted by name) shows the inheritance relationships. Starting with a top-level parent class, like database.lvclass, you build a hierarchical name for the subclasses by prepending the base name and an underscore character to the subclass name. In this case you might have two subclasses: one for ADO connections and one for SQLite (which uses its own dll). So you would create classes with the names database_ado.lvclass and database_sqlite.lvclass. Now within ADO-compatible databases there is a lot of commonality, but there can be differences so to cover those cases you might want to further define subclasses like database_ado_access.lvclass, database_ado_sqlserver.lvclass and database_ado_oracle.lvclass. Obviously this technique has limitations — like it has to be maintained manually, and you can end up with very long file names — but until NI addresses the basic issue, it is better than nothing.
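Since the payoff of this convention is what you see when the names are sorted, here is a quick Python check using the class names from the example above. It works because the period (0x2E) sorts ahead of the underscore (0x5F) in ASCII, so a plain lexical sort lands each parent immediately above its children:

```python
names = [
    "database_ado_sqlserver.lvclass",
    "database_sqlite.lvclass",
    "database.lvclass",
    "database_ado_access.lvclass",
    "database_ado.lvclass",
    "database_ado_oracle.lvclass",
]

# A plain lexical sort reproduces the inheritance hierarchy:
# parents first, then their subclasses grouped beneath them.
for name in sorted(names):
    print(name)
```

One caveat: file browsers that use “natural” or case-folding sorts may order these slightly differently than a strict ASCII sort does.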

Two other important conventions: never use the same name to refer to two things that are structurally or functionally different, and never use two different names to refer to the same object.

Style Conventions

At NIWeek this year, I had an interesting meeting. A young developer came up to me and asked whether he was right in assuming that I had written a particular application. I said I had, but asked how he knew. He replied that he had worked on some code of mine on an earlier contract and recognized my coding style. For him it was an advantage because, when making a modification, he knew where to start looking.

Basically, style conventions define a standardized way of addressing certain common situations. For example, I make use of units for floating-point numbers to the greatest degree that I can because doing so simplifies code. One common situation that illustrates this point is the need to perform math on time values. If you want to generate a time that is offset from the present by exactly 1 day, you have a few options:

image

This works, but it assumes that the maintainer is going to recognize that 86,400 is the number of seconds in a day.

image

This is also a possibility and, thanks to constant folding, is just as efficient as the previous example. However it takes up an awful lot of space on the diagram and it essentially hardcodes a 1 day decrement — though that might not be clear at first glance either.

image

Personally, I like this solution because it literally expresses the math involved: Take the current time expressed in days, subtract one day from it and express the result in seconds. Remember that the point of good style is to make it clear what the code is logically doing. Great style makes it clear what you were thinking.
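The same trade-offs show up in any language. Here is a rough Python parallel of the three diagrams, using `time.time()` (which returns seconds) in place of LabVIEW’s timestamp; `timedelta` stands in, loosely, for LabVIEW’s units:

```python
import time
from datetime import timedelta

now = time.time()  # current time, in seconds since the epoch

# Option 1: magic number -- works, but the maintainer must recognize
# that 86,400 is the number of seconds in a day.
one_day_ago = now - 86400

# Option 2: spelled-out arithmetic -- constant folding makes it just as
# efficient, but it still hardcodes the one-day offset.
one_day_ago = now - (60 * 60 * 24)

# Option 3: express the math in the units of the problem and let the
# library carry the meaning.
one_day_ago = now - timedelta(days=1).total_seconds()
```

All three produce the same value; only the third states the intent.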

So that’s enough for now to introduce you to the concept of conventions. But be assured, it will come up again.

Until next time…

Mike…

Module, module, who’s got the module?

One of the foundation stones of software architecture is the idea of modularity, or modular design. The only problem is that while the concepts of modular design are well-known, the term “module” seems to get used at times in rather inexact ways. So what is a module? Depending on who’s speaking it sometimes appears to refer to a VI, sometimes to a library or collection of VIs, and sometimes even a whole program or executable can be called a module. The thing is, all those are legitimate usages of the word — and they are just the beginning.

So how is one to keep straight what is meant? The only way that I have found is to stay alert and keep the conversation’s context straight in your head. Well-designed software is always modular, but it is also hierarchical. Consequently, the modularity will also be hierarchical. For example, a VI can encapsulate some functionality that you wish to use throughout a project (or across multiple projects) and so can be considered a module. But that VI can also be used as part of a higher-level, more-abstract module that uses it to implement some broader system-level functionality.

To see this structure in action, all you have to do is consider any modern software package — including LabVIEW itself. Much of the user experience we think of as “LabVIEW” is actually the result of cooperation among dozens of independent modules. There are modules that form the user interface, modules that implement the compiler, and other modules that manage system resources like licenses, network connections and DAQ IO. Regardless of the level at which the modularity occurs, the benefits are much the same — reduced development costs, improved maintainability, lower coupling and enhanced cohesion. In short, modularity is a Very Good Thing.

However, you may have noticed that this description (like so many others) concentrates on modularization as part of the implementation process. Consequently, I want to introduce another idea about modules: the use of modularization during the design process. In particular, I want to provide you with a link to a paper by D.L. Parnas that was published the year I graduated high school, that I first read in 1989, and that every software engineer should read at least once a year because it is as relevant today as it was 43 years ago. The paper bears the rather daunting title: On the Criteria To Be Used in Decomposing Systems into Modules.

As the title suggests, the point is not to make the case for modularization — even then it was assumed that modularity was the way to go. The question was how to go about breaking up a system into modules so as to derive the greatest benefit. To this end, Dr. Parnas takes a simple application and examines two different ways that it could be broken down into modules, comparing and contrasting the results.

In a section titled What is Modularization? Dr Parnas sets the context for the following discussion:

In this context “module” is considered to be a responsibility assignment rather than a subprogram. The modularizations include the design decisions which must be made before the work on independent modules can begin.

In other words, this discussion is primarily about designing, not implementing. Just as the first step in designing a database is generating a data model, so the first step in designing a program is to identify where the functional responsibilities lie. And just as the data model for a database doesn’t translate directly into table structures, so this design modularization will not always translate directly into code. However, when done properly this initial design work serves a far greater good than simply providing you with a blueprint of the code you need, it also teaches you how to think about the problem you are trying to solve.

It can be easy to get tied-up in the buzzwords of the work we do and think that they (the buzzwords) will protect us from creating bad code like talismans or magical words of power. But there is no magic to be found in words like “object-oriented”, “hierarchical structure” or “modular design”. As Parnas shows, it is possible to create very bad code that is modular, hierarchical and object oriented.

Until next time…

Mike…

Build a Proper LabVIEW Producer/Consumer Pattern

It’s not for nothing that people who program a lot spend a lot of time talking about design patterns: Design patterns are basic program structures that have proven their worth over time. And one of the most commonly-used design patterns in LabVIEW is the producer/consumer loop. You will often hear it recommended on the user forum, and NI’s training courses spend a lot of time teaching it and using it.

The basic idea behind the pattern is simple and elegant. You have one loop that does nothing but acquire the required data (the producer) and a second loop that processes the data (the consumer). To pass the data from the acquisition task to the processing task, the pattern uses some sort of asynchronous communications technique. Among the many advantages of this approach is that it deserializes the required operations and allows both tasks (acquisition and processing) to proceed in parallel.

Now, the implementation of this design pattern that ships with LabVIEW uses a queue to pass the data between the two loops. Did you ever stop to ask yourself why? I did recently, so I decided to look into it. As it turns out, the producer-consumer pattern is very common in all languages — and they always use queues. Now if you are working in an unsophisticated language like C or any of its progeny, that choice makes sense. Remember that in those languages, the only real mechanism you have for passing data is a shared memory buffer, and given that you are going to have to write all the basic functionality yourself, you might as well do something that is going to be easy to implement, like a queue.

However, LabVIEW doesn’t have the same limitations, so does it still make sense to implement this design pattern using queues? I would assert that the answer is a resounding “No” — which is really the point that the title of this post is making: While queues certainly work, they are not the technique that a LabVIEW developer who is looking at the complete problem would pick unless of course they were trying to blindly emulate another language.

The Whole Problem

The classic use case for this design pattern includes a consumer loop that does nothing but process data. Unfortunately, in real-world applications this assumption is hardly ever true. You see, when you consider all the other things that a program has to be able to do — like maintain a user interface, or start and stop itself in a systematic manner — much of that functionality ends up being in the consumer loop as well. The only other alternative is to put this additional logic in the producer loop where, for example, asynchronous inputs from the operator could potentially interfere with your time-sensitive acquisition process. So if the GUI is going to be handled by the consumer loop, we have a couple questions to answer.

Question: What is the most efficient way of managing a user interface?

Answer: Control events

The other option, of course, is to implement some sort of polling scheme, which is, at the very least, extremely inefficient. While it is true that you could create a third, separate process just to handle the GUI, you are still left with the problem of how to communicate control inputs to the consumer loop — and perhaps the producer too. So let’s just stick with events.

Question: Do queues and control events play well together?

Answer: Not so much, no…

The problem is that while queues and events are similar in that they are both ways of communicating that something happened by passing some data associated with that happening, they really operate in different worlds. Queues can’t tell when events occur and events can’t trigger on the changes in queue status. Although there are ways of making them work together, the “solutions” can have troublesome side effects. You could have an event structure check for events when the dequeue operation terminates with a timeout, but then you run the risk of the GUI locking up if data is put into the queue so fast it never has the chance to time out. Likewise, you could put the dequeue operation into a timeout event case, but now you’re back to polling for data — the very situation you are wanting to avoid.

Thankfully, there is a simple solution to the problem: All you have to do is lose the queue…

The Solution

The alternative is to use something that integrates well with control events: user-defined events (UDEs). “But wait!”, you protest, “How can a UDE be used to transfer data like this? Isn’t the queue the thing that makes the producer-consumer pattern work?” Well, yes. But if you promise to keep it under your hat, I can tell you a little secret: Events have a queue too.

The following block diagram shows a basic implementation of this technique that mirrors the queue-based pattern that ships with LabVIEW.

event-driven producer-consumer

Note that in addition to the change to events, I have tweaked the pattern a bit so you can see its operation a bit better:

  1. The Boolean value that stands in for the logic determining whether to fire the event is now a control, so you can test the pattern’s operation.
  2. The data transferred is no longer a constant so you can see it change when new data is transferred.
  3. Because the consumer loop is now event-driven, the technique for shutting down the program is actually usable (though not necessarily optimum) for deliverable code.

For those who are only familiar with control events, the user-defined event is akin to what, back in the day, was called a software interrupt. The first function to execute in the diagram creates the event and defines its datatype. The datatype can be anything, but the structure shown is a good one to follow due to a naming convention that LabVIEW implements. The label of the outer cluster is used as the name of the event — in this case acquire data. Anything inside the cluster becomes the data that is passed through the event. This event, as defined, will pass a single string value called input data.

The resulting event reference goes to two places: the producer loop so it can fire the event when it needs to, and a node that creates an event registration. The event registration is the value that allows an event structure to respond to a particular event. It is very important that the event registration wire only have two ends (i.e. no branches). Trying to send a single registration to multiple places can result in very strange behavior.

Once this initialization is done, operation is not terribly different from that of the queue-based version. Except that it will shut down properly, of course. When the stop button is pressed, the value change event for the button causes both loops to stop. After the consumer loop stops, the logic first unregisters and then destroys the acquire data UDE.
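For readers more at home in text-based languages, here is a rough Python analogy of the event-driven version. The event names `acquire data` and `stop` mirror the diagram; everything else is illustrative. The essential point is that data events and control events arrive through a single stream, so the consumer never has to poll or juggle two different wait mechanisms:

```python
import queue
import threading

events = queue.Queue()  # the hidden queue behind the event mechanism

def producer(samples):
    # Fire an "acquire data" event for each new value...
    for value in samples:
        events.put(("acquire data", value))
    # ...and a "stop" event, standing in for the stop button's
    # value-change event that shuts down both loops.
    events.put(("stop", None))

received = []

def consumer():
    # A single wait point handles data and control events alike.
    while True:
        name, payload = events.get()
        if name == "stop":
            break
        received.append(payload)

worker = threading.Thread(target=consumer)
worker.start()
producer(["sample 1", "sample 2", "sample 3"])
worker.join()
```

In LabVIEW the event structure provides this single wait point for free, along with the registration and cleanup steps described above.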

And that, for now at least, is all there is to it. But you might be wondering, “Is it worth taking the time to implement the design pattern in this way just so the stop button works?” Well, let’s say for now that the stop button working is a foretaste of some good stuff to come, and we will be back to this pattern, but first I need to talk about a few other things.

Until next time…

Mike…