
Spectral Cameras

A target is imaged by first determining the height of the scene that the spectral imaging system is exposed to. This is determined by the focal length of the lens, the imaging slit width and the distance to the target. If this height works out to be 0.5 mm, we must take a spectral frame, move either the imaging spectrometer or the target 0.5 mm, then take the next spectral frame, repeating this process until the entire scene has been imaged. If we move less than 0.5 mm, we will be oversampling the scene, repeating the data gathered from a single point. If we move further than 0.5 mm, we will be undersampling, missing data from the target we are imaging.

How fast can a scene be imaged? This is a simple question with a complicated answer. The imaging speed is determined by:

  • The sensitivity of the camera and the illumination of the target (lower sensitivity or lower light levels require longer integration times)
  • The data transfer capabilities of the camera
  • The pixel depth (an 8 bit pixel is ½ of the data of a 10 or 12 bit pixel)
  • The transfer speed of the camera to computer interface (CameraLink is fast, USB is much slower)
  • The computer’s ability to process the incoming data

A fast camera with lots of light can produce more than 100 full spectral frames per second. This means that if the imaging height of the scene is 0.5 mm, you can image more than 50 mm/sec. Handling the data at the computer becomes a problem, as 100 frames/second results in 50 megabytes of data per second that needs to be processed. A number of compromises can be made, including decreasing the bit depth or the spectral or spatial resolution, depending on the application.
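
To make the arithmetic concrete, here is a small sketch of the calculation (the frame rate, frame dimensions and bit depth below are example values chosen so the numbers match the figures above, not the specification of any particular camera):

#include <cstdio>

int main()
{
    // Example values only - substitute the figures for your own system.
    const double framesPerSecond = 100.0;  // spectral frames per second
    const double sliceHeightMm   = 0.5;    // scene height imaged per frame
    const int    spatialPixels   = 1024;   // pixels across the imaging slit
    const int    spectralBands   = 512;    // wavelength bands per frame
    const int    bytesPerPixel   = 1;      // 8 bit pixels (use 2 for 10/12 bit data)

    const double scanSpeedMmPerSec = framesPerSecond * sliceHeightMm;
    const double megabytesPerSec   = framesPerSecond * spatialPixels *
                                     spectralBands * bytesPerPixel / (1024.0 * 1024.0);

    std::printf("Scan speed: %.1f mm/sec\n", scanSpeedMmPerSec);  // 50.0 mm/sec
    std::printf("Data rate:  %.1f MB/sec\n", megabytesPerSec);    // 50.0 MB/sec
    return 0;
}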

The spectral resolution of the imaging spectrograph is defined by the optics of the prism or grating mechanism and the entrance slit width of the device. The light entering the system is dispersed into its components according to wavelength. For example, Specim’s ImSpector V10-e provides a spectral resolution of 2.8 nm with a 30 µm slit width (depending on the detector and optics). 

Some CCD and CMOS detectors have a thin coating on the detector surface that causes interference phenomena (like Newton’s rings), which are seen as horizontal waves. This is an aesthetic problem only and does not interfere with spectral imaging. 

If there is little or no signal above 700 nm, check for the following causes:

  • The light source has an infrared cut-off filter, or the fiber optics absorb the light.
  • The camera is equipped with an infrared cut-off filter (hot mirror).
  • The detector has low response (low QE) above 700 nm.
  • The front objective coatings are not designed for wavelengths above 700 nm.

If there is little or no signal at short wavelengths, check for the following causes:

  • The light source (usually halogen) does not produce much energy at the short wavelengths.
  • The camera detector has low response at the short wavelengths.
  • There is a lens coating on the front objective, or a UV blocking filter is present.

If the image will not come into focus, check for the following causes:

  • The back focal length of the lens is incorrect for the camera (the lens may not be C-mount).
  • The lens is not focused on your target. Use a focus target to set the focus.
  • The objective lens is loose or incorrectly installed.
  • The objective lens is not suited for spectral imaging (low quality, wrong wavelength range, unsuitable coatings).

If the image is dark or there is no signal, check the following:

Is the lens cap on the objective lens?
Is the lens aperture open?
Do you have adequate integration time?
Do you have an incompatible light source, i.e., is there an IR cut-off filter?
Does the target have high absorption/low reflectance?

A data cube is simply a collection of sequential spectral frames placed back to back. If we imaged a target with our 1024 pixel x 1024 pixel imaging spectrograph using an imaging height of 0.5 mm and took 200 images, the dimensions of our cube would be frames x pixel width x pixel height, or 200 x 1024 x 1024. 

A spectral frame is the image captured by the imaging spectrograph. The horizontal dimension, or row, is spatial. The field of view is defined by the focal length of the objective lens, the distance to the target and the width of the sensor. This field of view is then divided into the number of pixels of horizontal resolution. The vertical dimension is spectral. Each column of pixels on the sensor corresponds to one portion of the thin slice of the target, and each pixel within the column represents the intensity of light reflected from that portion at a particular wavelength.

A waterfall image is simply a spatial image taken from our data cube. If we take a slice from the data cube in the frame x pixel width plane (the two spatial dimensions), we will get a recognizable image of the target at a particular wavelength.
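
As a hedged sketch of how such a cube might be organized in memory and how a spatial slice is pulled out of it (the frame-major layout and the names used here are illustrative assumptions, not part of any particular SDK):

#include <vector>
#include <cstdint>

const int kFrames = 200;   // scan positions (one per 0.5 mm step)
const int kBands  = 1024;  // spectral bands (vertical dimension of a frame)
const int kWidth  = 1024;  // spatial pixels (horizontal dimension of a frame)

// Cube stored frame-major: index = (frame * kBands + band) * kWidth + x
std::vector<std::uint16_t> cube(static_cast<std::size_t>(kFrames) * kBands * kWidth);

inline std::uint16_t cubeAt(int frame, int band, int x)
{
    return cube[(static_cast<std::size_t>(frame) * kBands + band) * kWidth + x];
}

// A waterfall image is the slice at one fixed wavelength band:
// kFrames rows (scan direction) by kWidth columns (across the slit).
std::vector<std::uint16_t> waterfall(int band)
{
    std::vector<std::uint16_t> image(static_cast<std::size_t>(kFrames) * kWidth);
    for (int frame = 0; frame < kFrames; ++frame)
        for (int x = 0; x < kWidth; ++x)
            image[static_cast<std::size_t>(frame) * kWidth + x] = cubeAt(frame, band, x);
    return image;
}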

An imaging spectrograph transforms a very thin slice of an image into its spectral components by using a prism, grating or both and projects the spectral information onto an imaging sensor, typically a scientific CCD or CMOS camera. 

Spectral imaging is a combination of imaging and spectroscopy, where a complete spectrum is collected at every location of an image plane. This powerful technique is sometimes called hyperspectral or multispectral imaging. Spectral imaging is not restricted to visible light, but works from ultraviolet to infrared. Wikipedia offers a very good overview of hyperspectral imaging: http://en.wikipedia.org/wiki/Hyperspectral_imaging

Spectroscopy captures the entire spectrum, light intensity as a function of wavelength. It's this very detailed spectral response curve that gives spectral imaging the ability to discriminate specific chemicals and elements. The unique reflections and absorbances are the signature of the compound.

When a spectral camera images a scene, the frame can be considered to be three dimensional. What the viewer sees when viewing the image is the two dimensional spectral frame, which is defined by the area of the detector. This frame typically has data for each pixel of the camera. What must be remembered is that this is the spectral image of an area defined by the optics of the spectral camera. If the height of the scene being imaged is 0.5 mm, as an example, each pixel can be considered a 3D cube defined as pixel height x pixel width x scene height. If the scene height and the pixel width are not equal, a waterfall image, which is simply a slice taken through the data cube, will present a rectangular pixel defined as scene height x pixel width. When this image is presented on a screen with square pixels, the image will appear to be “compressed”, even though the data is completely valid.
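
A minimal sketch of the vertical stretch needed to display such a waterfall image with square-looking pixels (nearest-neighbor scaling is used only for brevity, and the parameter names are illustrative assumptions):

#include <vector>
#include <cstdint>

// Stretch a waterfall image vertically so its pixels appear square on screen.
// sceneHeightMm: scene height imaged per frame (e.g. 0.5 mm)
// pixelWidthMm:  scene width covered by one pixel across the slit
std::vector<std::uint8_t> correctAspect(const std::vector<std::uint8_t>& src,
                                        int width, int frames,
                                        double sceneHeightMm, double pixelWidthMm)
{
    const double scale   = sceneHeightMm / pixelWidthMm;  // vertical stretch factor
    const int    outRows = static_cast<int>(frames * scale + 0.5);

    std::vector<std::uint8_t> dst(static_cast<std::size_t>(outRows) * width);
    for (int row = 0; row < outRows; ++row)
    {
        int srcRow = static_cast<int>(row / scale);        // nearest-neighbor pick
        if (srcRow >= frames) srcRow = frames - 1;
        for (int x = 0; x < width; ++x)
            dst[static_cast<std::size_t>(row) * width + x] =
                src[static_cast<std::size_t>(srcRow) * width + x];
    }
    return dst;
}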

If the wavelength scale appears to be incorrect, check for the following causes:

  • Incorrect calibration – the spectral lines from a reference source have not been correctly identified. You can use a simple fluorescent table lamp to identify spectral lines.
  • The camera detector is too small, misaligned or not centered.
  • There are calculation errors.

Machine Vision

Camera Temperature

The electronic devices used in digital cameras are typically specified to run at up to 70° C (158° F). If the temperature inside of the camera rises above that point, the devices may be damaged or destroyed. This is true for all types of electronic cameras. 

An electronic camera usually contains a CCD sensor and several types of analog and digital devices. For the analog devices, as their temperature increases, electrical noise in the devices increases noticeably. Digital devices have the same characteristic but with digital signal interpretation, noise is not usually an important factor. The performance of the CCD sensor is also influenced by heat and the image produced by the sensor tends to get noisier as temperature increases. 

As you can see, with the potential for component damage and poor performance caused by heat, it makes great sense to keep the components inside of the camera as cool as possible. What can be done to keep the devices inside of a camera cool? 
 

1. Design the camera so that there is good heat flow from the electronic devices to the camera housing. Good heat flow between the components and the housing allows the heat to be dissipated to the atmosphere around the camera rather than being captured inside of the camera. Good heat flow between the components and the housing also helps external cooling devices such as heat sinks, fans and Peltier coolers work better. Basler engineers took great care to design a good thermal connection between the components in our cameras and the camera housings. This connection brings the heat to the housing surface so that cooling can be more effective.
2. Keep the camera’s power consumption low. Power input to the camera must equal power output from the camera (this is known as energy balance). Cameras usually only have two ways for power to exit: through the data connection to the frame grabber and as heat. So basically, any power that is not used by the digital drivers for LVDS communication will be converted to heat. Basler engineers have carefully selected the electronic devices used in our cameras so that power consumption is at a minimum.
3. Use external cooling. This is something that the camera user can do.

Basler cameras are typically specified for operation at up to 40° C (104° F). At higher temperatures, a heat sink, fan, Peltier cooler or a similar device must be used to cool the camera.

Signal-to-Noise Ratio

An ideal camera sensor would convert a known amount of light into an exactly predictable output voltage. Unfortunately, ideal sensors (like all other electronic devices) do not exist. Due to temperature conditions, electronic interference, etc., sensors will not convert light 100% precisely. Sometimes, the output voltage will be a bit bigger than expected and sometimes, it will be a bit smaller. The difference between the ideal signal that you expect and the real-world signal that you actually see is usually called noise. The relationship between signal and noise is called the signal-to-noise ratio (SNR). 

Signal-to-noise ratio is commonly expressed as a factor such as 20 to 1, 30 to 1, etc. Signal-to-noise ratio is also frequently stated in decibels (dB). The formula for calculating a signal-to-noise ratio in dB is: SNR = 20 x log (Signal/Noise). 
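
For example, applying the formula to a 20 to 1 ratio (the values below are arbitrary illustration numbers):

#include <cmath>
#include <cstdio>

int main()
{
    const double signal = 200.0;  // example signal level
    const double noise  = 10.0;   // example noise level (a 20 to 1 ratio)

    const double snrDb = 20.0 * std::log10(signal / noise);
    std::printf("SNR = %.1f dB\n", snrDb);  // prints roughly 26.0 dB
    return 0;
}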

Once noise has become part of a signal, it can’t be filtered or reduced. So it is a good idea to take precautions to reduce noise generation such as: 
 

1. using good quality sensors and electronic devices in your camera
2. using a good electronic architecture when designing your camera
3. lowering the temperature of the sensor and the other analog devices in your camera
4. taking precautions to prevent noisy environmental conditions from influencing the signal (such as using shielded cable)

Many times, camera users will increase the gain setting on their cameras and think that they are improving signal-to-noise ratio. Actually, since increasing gain increases both the signal and the noise, the signal-to-noise ratio does not change significantly when gain is increased. Gain is not an effective tool for increasing the amount of information contained in your signal. Gain only changes the contrast of an existing image. 

PRNU

When a fixed, uniform amount of light falls on the sensor cells in a digital camera, each cell in the camera should output exactly the same voltage. However, due to a variety of factors including small variations in cell size and substrate material, this is not actually true. When a uniform light is shined on the cells in a digital camera, the cells output slightly different voltages. This difference in response to a uniform light source is referred to as “Photo Response Non-Uniformity” or PRNU for short. Since PRNU is caused by the physical properties of the sensor itself, it is almost impossible to eliminate. PRNU is usually considered to be a normal characteristic of the sensor array used in a camera. 

One easy way to deal with PRNU is to use a look up table (LUT). With this method, the sensor cells in a camera are exposed to uniform light and an adjustment factor that would result in a uniform output is calculated for each sensor cell. The adjustment factor for each cell is stored in a table. When an image is captured, a software routine looks in the table and applies the appropriate correction factor to the output from each cell. 
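
A minimal sketch of that correction step, assuming the per-cell adjustment factors have already been measured under uniform illumination (the function and variable names are illustrative only, not part of any camera API):

#include <vector>
#include <cstdint>
#include <algorithm>

// Apply a previously measured per-cell gain table to a captured frame.
// gainTable[i] = (average flat-field response) / (response of cell i)
void correctPrnu(std::vector<std::uint16_t>& frame,
                 const std::vector<double>& gainTable)
{
    for (std::size_t i = 0; i < frame.size(); ++i)
    {
        const double corrected = frame[i] * gainTable[i];
        frame[i] = static_cast<std::uint16_t>(std::min(corrected, 65535.0)); // clip to 16 bits
    }
}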

PRNU can be made worse if the gain on your camera is set too low or if your exposure time is set too high (usually > 500 ms).

DirectX

Microsoft DirectX represents a collection of technologies established by Microsoft. It provides accelerated direct and uniform access to the video and audio hardware installed on a system. Since the introduction of Windows 98, DirectX has been an integral component of all Microsoft operating systems. You can view more detailed, official information about DirectX on the Microsoft website at: http://msdn2.microsoft.com/en-us/library/ms786508(VS.85).aspx
 

DirectShow

DirectShow is the generic term for those parts of the DirectX API that control the behaviour of video and audio data streams. Since the release of Windows 98, DirectShow has been established as a standard for multimedia oriented image processing. DirectShow enables any DirectShow compliant device (e.g., a camera) to operate with any DirectShow compliant software (e.g., DirectShow based image processing libraries such as MontiVision. See: http://www.montivision.com).
 

WDM

WDM is the abbreviation for the Windows Driver Model. The WDM describes a model driver architecture for Windows 2000 and its successors. A WDM streaming driver controls the processing and transport of streaming data at the operating system level (“kernel mode”) instead of at the application level (“user mode”).
 

The Pylon DirectShow Filter

If you install either the Basler pylon runtime package (can be downloaded free of charge from our website. Click here for the download.) or the Basler pylon SDK, a pylon DirectShow Filter Driver will automatically be installed on your system. Even though this filter driver is not actually a WDM driver (because it operates in user mode), it can be used in any situation where a WDM driver is required. 

This means that you can interface any Basler GenICam compliant camera that has a FireWire or GigE Vision interface with any WDM / DirectShow compliant application! Compliant cameras include: 
 

1. the A102f/fc
2. the A311f/fc and A312f/fc
3. the A601f/fc through the A641f/fc
4. all scout cameras
5. all pilot cameras
6. all runner cameras

The pylon DirectShow filter provides a GenICam feature tree that is very similar to the feature tree in the pylon Viewer application. The feature tree lets you access and control all of the features on all supported cameras. 

When an application (for example, the pylon Viewer) establishes a connection to a Basler GigE Vision camera, a so-called “control channel” is established. A channel is a virtual link used to transfer information between a device and an application. The control channel is used by an application to communicate with a device. Only one pylon based application is allowed to establish a control channel at a time. Such an application is called the “control application”. 

A heartbeat mechanism is used to allow a camera to detect if its control application is alive. A control application must periodically access the camera. If the camera doesn’t recognize an access within a configurable period of time, the camera will close the connection and become ready to accept new connections. The period of time is defined by the heartbeat timeout parameter, which is set to 3000 milliseconds by default. 

The pylon GigE library implements a thread which ensures that the camera is periodically accessed. When a debugger is used to break into an application, the heartbeat thread is paused, and this would normally cause the camera to close the connection after 3 seconds. To avoid this situation, the debug version of the pylon GigE library sets the heartbeat to 5 minutes by default. This means that if an application is terminated by using the debugger, or if it just crashes, the control channel will be kept open for 5 minutes and the only way to access the camera during that period is to power it off and on again. 

As an easier alternative, Basler provides a software tool called the C4Tool (Camera Control Channel Close Tool) that lets you close the control channel in a software manner. The C4Tool requires the WinPcap library (the Windows Packet Capture Library), and the library must be installed before using the C4Tool. 

Use this link to download a zip file that contains an installer for the WinPcap library and an installer for the C4Tool: Download

Basler provides two different high level C++ APIs for interfacing with our IEEE 1394 (FireWire) and GigE Vision cameras: the classic BCAM 1394 API and the GenICam compliant pylon API.

The table below shows which API supports which camera interface and uses which compiler.
 

               
                                       BCAM 1.8   BCAM 1.9   pylon 1.0     pylon 2.0     pylon 2.1
  Supported Camera Interfaces          1394a      1394a      1394a         1394a         1394a
                                                  1394b*     1394b*        1394b**       1394b**
                                                             GigE Vision   GigE Vision   GigE Vision
  Visual C++ 6.0
  (part of Visual Studio 6.0)          Y          Y          N             N             N
  Visual C++ 7.0
  (part of Visual Studio 2002.NET)     Y          Y          N             N             N
  Visual C++ 7.1
  (part of Visual Studio 2003.NET)     Y          Y          Y             Y             Y
  Visual C++ 8.0
  (part of Visual Studio 2005)         Y***       Y***       Y             Y             Y
  Visual C++ 9.0
  (part of Visual Studio 2008)         Y***       Y***       Y****         Y****         Y
 

* S400 speed on Windows XP SP2 requires the addition of patch KB885222. S800 speed on Windows XP SP2 requires a partial driver rollback to SP1

** S400 and S800 speeds without limitations. No patch or rollback required. 

*** Some minor source code modifications are required. For more information, click here

**** Unofficially supported for Visual C++ 9.0.

Basler provides two GigE Vision network drivers for interfacing Basler GigE Vision cameras: 
 

1. The Basler filter driver is a basic GigE Vision network driver that is compatible with all network adapters. The advantage of the filter driver is its extensive compatibility.

 

2. The Basler performance driver is a hardware specific GigE Vision network driver. The performance driver is only compatible with network adapters that use specific Intel Pro 1000 chipsets (“compatible chipsets”). The advantage of the performance driver is that it significantly lowers the CPU load needed to service the network traffic between the PC and the camera(s). It also has a more robust packet resend mechanism.

To take advantage of the benefits of the Basler performance driver, we recommend using Intel Pro 1000 network adapters. These adapters generally work well with the performance driver. However, since the Intel Pro 1000 series has changed over time, it may happen that the Basler performance driver does not support your particular Intel Pro 1000 adapter. 

To make sure that the Pro 1000 network adapter you are using is compatible, consult the table below that lists the currently supported Intel Pro 1000 chipsets and their corresponding Hardware IDs. Note that some chipsets are compatible with pylon version 2.1, but not with version 2.0. 

In the following table: 

1. Yes = is compatible
2. Yes/M = is compatible but requires manual installation
3. No = is not compatible

 

               
    Intel Pro 1000 Chipset     Hardware ID                Pylon 2.0   Pylon 2.1
    82540EM                    PCI\VEN_8086&DEV_100E      Yes         Yes
    82540EP_EL                 PCI\VEN_8086&DEV_101E      Yes         Yes
    82541GI                    PCI\VEN_8086&DEV_1076      Yes         Yes
    82541GI_LF                 PCI\VEN_8086&DEV_107C      Yes         Yes
    82545EM                    PCI\VEN_8086&DEV_100F      Yes         Yes
    82545GM                    PCI\VEN_8086&DEV_1026      Yes         Yes
    82563EB/80003ES2           PCI\VEN_8086&DEV_1096      No          Yes
    82567/ICH9_IGP_AMT*        PCI\VEN_8086&DEV_10BE      No          Yes/M
    82567/ICH9_IGP_AMT*        PCI\VEN_8086&DEV_10F5      No          Yes/M
    82571EB                    PCI\VEN_8086&DEV_105E      Yes         Yes
    4-port (2 x 82571EB)       PCI\VEN_8086&DEV_10A4      Yes         Yes
    4-port LP (2 x 82571EB)    PCI\VEN_8086&DEV_10BC      Yes/M       Yes
    82572EI                    PCI\VEN_8086&DEV_10B9      Yes         Yes
    82572EI-Copper             PCI\VEN_8086&DEV_107D      Yes         Yes
    82573E                     PCI\VEN_8086&DEV_108B      Yes         Yes
    82573E-IAMT                PCI\VEN_8086&DEV_108C      Yes         Yes
    82573L                     PCI\VEN_8086&DEV_109A      Yes         Yes
    82574L                     PCI\VEN_8086&DEV_10D3      No          Yes
 

* pylon operation with this chipset is unreliable 

To check the Hardware IDs for your network adapter: 

1. Click Start > Run. 

2. Type in: devmgmt.msc 

3. Click the OK button. The device manager will start. 

4. Expand the node for Network Adapters. 

5. Right click on the name of your Intel Pro 1000 adapter and select Properties from the drop down menu. 

6. Click the Details tab and make sure that Hardware IDs is selected in the drop down list.
 

Check the hardware IDs in the list on the Details tab against the table that appears earlier in this FAQ. If the hardware IDs for your adapter do not match an ID in the table, you must use the Basler filter driver with your network adapter. 

Customers who happen to acquire an unsupported Intel Pro 1000 network adapter, but who still need to use the Basler performance driver, can contact the Basler support team. The support team will arrange shipment of the non-compatible network adapter to Basler AG and will attempt to get the adapter supported either by creating a hotfix for the performance driver or by including the adapter in the next Basler performance driver release. 

On PCs with a Windows™ OS, if you configure a Basler GigE camera for a persistent (fixed) IP address and the address is in the range normally reserved for “multicast” IP addresses (224.0.0.0 to 239.255.255.255), the camera will not be discoverable by pylon, even when you use the pylon IP Configuration Tool. This situation occurs because Windows rejects all incoming IP packets from any device (such as a GigE camera) with an IP address in the multicast range. You can find some good basic information about IP multicast on Wikipedia. 

Click here to open the Wikipedia article. 

The pylon IP Configuration Tool works by sending UDP broadcast messages to all attached cameras and waiting for the cameras to answer. But since Windows rejects packets from devices with IP addresses in the multicast range, answers from any camera with an IP address in the range will never reach the configuration tool. 

With the current package of pylon tools, there is no way to discover a camera with an IP address set in the multicast range. However, the Basler pylon API does provide a method for accessing a camera by its MAC address and for forcing a change to the camera’s IP address (the FORCEIP_CMD). This will let you set the camera back to a state where it is discoverable. 

A programming sample is available that illustrates how to use the pylon C++ API to set the camera’s IP configuration. It also illustrates how to use the Force IP command. The programming sample is based on the Basler pylon 2.0 C++ SDK and to build the entire sample project, you must have the pylon 2.0 SDK installed on your PC. The sample also includes a prebuilt “Simple IP Configuration Tool” executable which will run on PCs that have either the pylon SDK or Basler’s free pylon 2.0 runtime package installed. 

You can use the link below to download the programming sample:
 

IP Configuration Sample - zip, 2.7 MB

And you can click here to access the pylon 2.0 runtime package download on the Basler website. 

Powering IEEE 1394 Cameras When Using a Laptop 

If you are using a laptop, you must make sure that the camera is being properly supplied with power. Many of the built-in IEEE 1394 connectors on laptops and the connectors on add-on IEEE 1394 PCMCIA cards only have 4 pins instead of the normal 6 pins. The two missing pins are for the wires used to supply power to the camera. If your laptop’s built-in connector or the connector on its PCMCIA card has only 4 pins, it will not supply power to the camera and your camera will not work. In this case, you can purchase an adapter cable from Basler with an extra power input, switch to a card that does supply power to the camera, or add a powered IEEE 1394 hub to your system. 

CCD vs. CMOS

CCD sensors use devices called shift registers to transport charges out of the sensor cells and to the other electronic devices in the camera. The use of shift registers has several disadvantages: 
 

1. Shift registers must be located near to the photosensitive cells. This increases the possibility of blooming and smearing.
2. The serial nature of shift registers makes true area of interest image capture impossible. With shift registers, the readings from all of the sensor cells must be shifted out of the CCD sensor array. After all of the readings have been shifted out, the readings from the area of interest can be selected and the remaining readings are discarded.
3. Due to the nature of the shift registers, large amounts of power are needed to obtain good transfer efficiency when data is moved out of the CCD sensor array at high speed.

CMOS sensors and CCD sensors have completely different characteristics. Instead of the silicon sensor cells and shift registers used in a CCD sensor, CMOS sensors use photo diodes with a matrix oriented addressing scheme. These characteristics give CMOS the following advantages: 
 

1. The matrix addressing scheme means that each sensor cell can be accessed individually. This allows true area of interest processing to be done without the need to collect and then discard data.
2. Since CMOS sensors don’t need shift registers, smear and blooming are eliminated and much less power is needed to operate the sensor (approximately 1/100th of the power needed for a CCD sensor).
3. This low power input allows CMOS sensors to be operated at very high speeds with very low heat generation.

The quality of the signals generated by CMOS sensors is quite good and can be compared favorably with the signals generated by a CCD sensor. Also, CMOS integration technology is highly advanced; this creates the possibility that most of the components needed to produce a digital camera can be contained on one relatively small chip. Finally, CMOS sensors can be manufactured using well-understood, standardized fabrication technologies. Standard fabrication techniques result in lower cost devices. 

Basler scout cameras are available with a DCAM compliant IEEE 1394b (FireWire b) interface or with a GigE Vision interface. When using scout cameras, you must first decide which camera interface to design into your application. 

Some detailed information about GigE Vision can be found in the White Paper downloads section of our website. Information about CPU load and latency can also be found there. Click here to go to the white paper downloads. 

The table below presents an overview of the most important differences between the two interfaces: 

 

         
Cable length
  GigE Vision: Up to 100 meters per cable. Multiple cables can be connected using switches or repeaters, thus allowing a virtually unlimited cable length. Standard CAT6 Ethernet cable can be used.
  IEEE 1394b: Up to 4.5 meters per cable. With special cables and a repeater, the max. cable length can be extended up to 14.5 meters. Standard FireWire cables can be used.

Power supply
  GigE Vision: Camera power must be supplied via a power connector on the camera. An additional cable for the power supply is required.
  IEEE 1394b: Camera power is supplied over the 1394 cable.

Broadcasting commands (one application sends the same command, e.g., a software trigger, to all cameras connected to one network/bus)
  GigE Vision: Currently not supported, but in preparation.
  IEEE 1394b: Fully supported.

Max. available bandwidth
  GigE Vision: Up to 1 gigabit per second.
  IEEE 1394b: 1394a - up to 400 megabits per second; 1394b - up to 800 megabits per second.

CPU load (the CPU load caused by streaming data from the camera to the PC is highly dependent on the PC hardware and camera settings)
  GigE Vision: Low (5-10% is typical).
  IEEE 1394b: Even lower (< 1%).

Latency when sending a command from the PC to the camera (e.g., a software trigger)
  GigE Vision: 600-900 µs (jitter of about 30%).
  IEEE 1394b: 350-500 µs (jitter of about 30%).

Latency when sending image data from the camera to the PC (when the image will be present in PC memory is highly dependent on the PC hardware and camera settings)
  GigE Vision: 13-14 ms.
  IEEE 1394b: 13-14 ms.

Accessory hardware
  GigE Vision: All required hardware is available as standard, industrially proven devices:
  1. Gigabit Ethernet network interface connector (also called the Ethernet card, NIC, or network adapter) is most likely already part of the PC
  2. Switch (if needed to connect multiple cameras to one NIC)
  3. Standard CAT6 Ethernet cable
  IEEE 1394b: The required hardware is typically derived from the consumer market:
  1. FireWire adapter card is required
  2. Standard FireWire cables
  3. Hub (if needed to connect multiple cameras to one FireWire card or to extend the max. cable length)
  4. Repeater (if needed to extend the max. cable length)
 
 

Summary:

For new projects where supplying camera power over the interface cable is NOT a must, we recommend using the GigE Vision interface for the following reasons: 

1. Virtually unlimited cable length
2. Robust, attractively priced, industrially proven accessory hardware
3. Higher max bandwidth compared to 1394b

Avoiding Data or Video Corruption

Corrupted data or video when using a notebook’s IEEE 1394 port can be caused by too much latency in the C3 power state transition, which results in buffer underruns. In other words, the interrupts associated with the processor’s ability to dynamically change speeds conflict with the high demand on the processor that occurs when streaming video or data across the IEEE 1394 port. When operating a Basler 1394 camera at high data rates, this behavior can be observed, for instance, as a jittering test image when using the BCAM viewer. We found this behavior with several different notebook computers including Dell, Toshiba, Acer, Gericom, and others. The problem may disappear or may become less noticeable when a USB device such as a mouse or a memory stick is connected to the notebook PC.

A better workaround, however, is to change the C3 latency setting in the Windows® Registry. You can either change the setting manually or you can download and run a file that will make the change automatically. 

Caution! The following procedures contain information about editing the Windows registry. Basler does not guarantee success or support these actions. Any use of the information provided herein is performed at your own risk. You should make a backup copy of the registry files before executing any of the following steps. Incorrect use of the registry editor and editing the registry files can cause serious problems that may require a complete reinstall of your operating system. Basler assumes no responsibility, expressed or implied, regarding the consequences of any action taken as a result of the information provided herein. 

Changing the Setting Manually 

To change the C3 latency setting in the Windows registry, perform the following steps: 

1. Click the Start button and then click Run. 

2. In the Open box, type: regedit 

3. Click OK. The registry editor will open. 

4. In the registry editor, locate the following key: 

     HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Processor 

5. In the Processor key, right-click CStateFlags and select Modify. 

6. Change the Value data to 8. 

7. Exit the Registry Editor. 

Note: The Processor key may not exist. If it does not exist, you will need to create it. To create the key, perform the following steps: 

1. Right-click the Control key and select New and Key from the menu. 

2. Rename the new key as: Processor. 

3. Click on the Processor key. From the Edit menu, select New and DWORD value. 

4. Rename the new DWORD value to: CStateFlags. 

5. Right-click CStateFlags and select Modify. 

6. Change the Value data to 8. 

7. Exit the Registry Editor. 

After you make any changes to the registry, you must restart the computer for the changes to take effect. 

Making the Changes by Running a File 

As an alternative to manually changing the registry, you can download the file below and run it to make the changes. If the Processor key is not present in the registry, running this file will create the key and will set its value. If the Processor key already exists, running this file will simply change the value of the existing key.
 

C3State.reg (1Kb - Automatic Registry Editor)

After you run the file, you must restart the computer for the registry changes to take effect. 

After installing the Basler BCAM 1394 SDK on your computer, you will notice that we provide some code samples for Visual Studio 6.0 (containing the VC++ 6.0 C++ compiler) and for Visual Studio 2002.NET (containing the VC++ 7.0 C++ compiler). Newer Microsoft C++ compilers have been released since these code samples were created. The behavior of the compilers has changed, especially with the VC++ 8.0 compiler included in Visual Studio 2005.NET. This makes some modifications in the BCAM SDK source code and in the workspace project settings necessary if you want to build the SDK with this most recent Microsoft C++ compiler. This article describes the steps needed to build all BCAM SDK samples with the native C++ compiler (VC++ 8.0) of Visual Studio 2005.NET.

1. Install the most recent Windows Template Library (WTL 7.5). 
It can be downloaded from the WTL page in the Microsoft download center.

2. Remove write protection for BcamApiMfc7.vcproj. 
Due to a minor bug in our setup script, the “…\Basler\BCAM 1394 Driver\src\BcamApi\Mfc7\BcamApiMfc7.vcproj” file might be write protected. Visual Studio 2005.NET must modify this file, so please remove the write protection. 

3. Open the SDK with Visual Studio 2005.NET
Start Visual Studio 2005.NET and select File > Open > Project/Solution in the menu. Switch to the “…\Basler\BCAM 1394 Driver” folder and select BcamSamples.sln for opening. The Visual Studio Conversion Wizard will automatically start and will perform some necessary modifications for you. It will also ask you about creating a backup. Backups are always a good idea. 

4. Set the include and lib paths. 
In the Tools > Options > Projects and Solutions > VC++ Directories menu, add the “…\Basler\BCAM 1394 Driver\inc” path and the WTL 7.5 include directory (see step 1) to Visual Studio’s include paths and “…\Basler\BCAM 1394 Driver\lib” to Visual Studio’s lib paths. 

5. Rebuild the required static libraries. 
The prebuilt BCAM libraries can no longer be used with VC++ 8.0 and you must rebuild them. If you rebuild the libraries, you will notice some compiler errors because Microsoft changed the behavior of the VC++ 8.0 compiler to be more ANSI C++ compliant. The reason for the errors is always the same - a variable that was declared inside of a for-loop is no longer valid outside of the loop. This can easily be fixed if you declare and initialize the variable before the for-loop (see the sketch after step 7 below). If you do not want to patch the BCAM source code, ask the Basler VC Support Team for the appropriate source code patches. 

6. Rebuild the samples (except BcamViewer). 
You should now be able to rebuild the BCAM samples. You will notice compiler errors similar to those mentioned above and these errors can be fixed in the same way. If you don’t want to fix the code on your own, the Basler VC Support Team will help you with code patches. 

7. Rebuild the BcamViewer. 
Rebuilding the BcamViewer requires a bit more C++ experience to patch the sources. We recommend that you ask the Basler VC Support Team for the appropriate source code patches.
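
The scoping change mentioned in step 5 looks like this (a generic illustration of the VC++ 8.0 behavior, not an excerpt from the BCAM sources):

#include <cstdio>

void Process(int n) { std::printf("%d\n", n); }  // stand-in for the real work

int main()
{
    const int count = 10;

    // VC++ 6.0/7.x accepted using the loop variable after the loop even when it
    // was declared inside the for statement. VC++ 8.0 is more ANSI C++ compliant
    // and rejects it:
    //
    //     for (int i = 0; i < count; i++) { /* ... */ }
    //     Process(i);   // error under VC++ 8.0: 'i' is no longer in scope
    //
    // Fix: declare and initialize the variable before the for-loop.
    int i = 0;
    for (i = 0; i < count; i++)
    {
        /* ... */
    }
    Process(i);   // 'i' is still in scope here
    return 0;
}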

For some cameras, running the camera at a line rate near zero is not a problem.

For example:

  • L304k, L304kc, L400k, and L800k cameras have no minimum required line rate when an external trigger (ExSync) signal is used to trigger line acquisition. 
    Keep in mind that for proper operation, the exposure time should be at least 10% of the line period. And if these cameras are used in free run, there is a 10 Hz minimum line rate.

  • Runner cameras have no minimum line rate when an external line start trigger (ExLineStTrig) signal is used to trigger line acquisition. 
    However if an external line start trigger is not used, there is a minimum 100 Hz line rate.

 

On some cameras, there is an absolute minimum line rate.

For example:

  • L100k, L301k, L301kc and sprint cameras all have a minimum line rate of 1 kHz.

 

If your PC has a Windows XP operating system with Service Pack 2 and you have an IEEE 1394b (FireWire) adapter card installed, you will notice that any FireWire-b device attached to the PC, e.g., a Basler scout camera, will reduce its transmission rate to S100 (100 Mbits/s) speed. This happens even though the 1394b device should be capable of transmitting data at rates up to 800 MBits/s (S800). 

The speed reduction is the result of a limitation that was introduced in Service Pack 2. To regain the ability to run at full S800 speed on a PC equipped with Windows XP and Service Pack 2, you must perform a partial service pack rollback, i.e., you must roll back certain 1394 drivers to their Windows XP Service Pack 1 version. Section 2 of the “Scout-f with BCAM API User’s Manual” explains in detail how to do the partial rollback. To download the manual, please go to the download page for scout user manuals.

Saving Image Data Deeper Than 8 Bits as a TIFF Using Libtiff with Visual Studio 7

This FAQ describes how to save 16 bit monochrome image data to a conforming TIFF with libtiff. Once the image is saved, you can do further image post processing with other applications such as ImageJ. Libtiff is open source software, but it is not licensed under the GPL.

Integrating Libtiff into your Solution 

First, you must get libtiff. I suggest that you download Version 3.7.2 of the original sources from: ftp.remotesensing.org/pub/libtiff/old/. When your download is complete, copy the libtiff sources (\tiff-3.7.2\libtiff) into your solution directory (\libtiff). There are two configuration files with pre-processor definitions: tif_config.h and tiffconf.h. Since we are working with Visual Studio (on an Intel/AMD platform I assume), rename: 
 

  tiffconf.h    to   tiffconf.h.old
  tiffconf.h.vc    to   tiffconf.h
  tif_config.h.vc    to   tif_config.h

Next, set up a Win32 project named libtiff with a static library and no precompiled header. 
NOTE: It is not absolutely necessary for you to follow this part of the FAQ in all of its particulars. If you need to vary the procedure (e.g., by building a dynamic link library instead of a static library), then do so. You can also use the attached make-file to create a library. That’s just the way I made it and it worked. 

Now add all the libtiff-sources to your library, except for the dispensable OS-specific ones: 
tif_acorn.c, tif_apple.c, tif_atari.c, tif_msdos.c, tif_stream.cxx, tif_unix.c, tif_win3.c 

To access libtiff support in your code, add the libtiff path to your include paths, include <tiffio.h>, and link your application against the created libtiff library. 

The Sample Code: Save/load Image Data to/from your HDD 

I’ve prepared some sample code to illustrate working with 16 bit images. Click the link below to download the sample.
 

Libtiff Sample Code - zip, 274 KB

A TIFF (Tagged Image File Format) file consists of tags at the start of the file that describe the image data that follows, whether it is compressed or raw (this FAQ only deals with raw image data). So the method to save image data needs to know the destination filename, a pointer to the image data, the image dimensions, and the bit depth of the image data. 

bool save16mono( const char * szFilename, 
                 const unsigned short * pImageBuf,
                 const unsigned int width,
                 const unsigned int height,
                 const unsigned int bppReal );

The method to load the image data only needs to know which file to load to which memory area and tells you in which dimension and at which data depth the image was stored. 

bool load16mono( const char * szFilename, 
                 unsigned short * pImageBuf,
                 size_t SizeImageBuf, // for buffer overrun prevention
                 unsigned int * pWidth,
                 unsigned int * pHeight,
                 unsigned int * pBppReal );

The TIFFSetField()/TIFFGetField() routines give you the ability to write/read such a tag (e.g., the TIFFTAG_IMAGEWIDTH tag for the image’s width). 
The TIFFWriteScanline()/TIFFReadScanline() routines write/read an image line to/from a TIFF file. 
Both of these routines need a TIFF* file handle, which TIFFOpen() delivers. The semantics are just like those of fopen()/fclose(). 
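
The sample linked above contains the full implementation; the short sketch below only shows how these libtiff calls fit together when writing a mono 16 bit image (the function name, the tag selection, and the omitted error handling are simplifications for illustration):

#include <tiffio.h>

// Write a 16 bit monochrome buffer to a TIFF file, one scanline at a time.
bool save16monoSketch( const char * szFilename,
                       const unsigned short * pImageBuf,
                       unsigned int width,
                       unsigned int height )
{
    TIFF * tif = TIFFOpen( szFilename, "w" );           // like fopen()
    if ( tif == NULL )
        return false;

    TIFFSetField( tif, TIFFTAG_IMAGEWIDTH, width );
    TIFFSetField( tif, TIFFTAG_IMAGELENGTH, height );
    TIFFSetField( tif, TIFFTAG_BITSPERSAMPLE, 16 );
    TIFFSetField( tif, TIFFTAG_SAMPLESPERPIXEL, 1 );     // monochrome
    TIFFSetField( tif, TIFFTAG_PHOTOMETRIC, PHOTOMETRIC_MINISBLACK );
    TIFFSetField( tif, TIFFTAG_PLANARCONFIG, PLANARCONFIG_CONTIG );

    for ( unsigned int row = 0; row < height; row++ )
    {
        if ( TIFFWriteScanline( tif, (void*)(pImageBuf + row * width), row, 0 ) < 0 )
        {
            TIFFClose( tif );
            return false;
        }
    }

    TIFFClose( tif );                                    // like fclose()
    return true;
}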

My sample code first generates a 10 bit deep image gradient (400 x 300 pixels, pExampleBuf) and then saves the image data, loads the image into another buffer (pLoadBuf), and compares the generated and loaded buffers to each other. In some cases, the image data you want to save/load is not necessarily 16 bits deep (it could be anywhere between 8 bits and 16 bits). So, for example, if you have 10 bit image data for one pixel and store it unshifted in a 16 bit word, the data occupies only the low bits and a viewer that expects full 16 bit values will show an almost black image. Shifting the data up toward the most significant bits makes the largest part of the image data visible. This illustrates that sometimes, before saving the image data, the saving method must shift the data (see \CodeSample\main.cpp:75): 

// shift data, if necessary 
int SHL = bppSave - bppReal; 
if( SHL > 0 ) 
{ 
     for( unsigned int c = 0; c < (width*height); c++ ) 
     { 
           pTmpImgBuf[c] = (pTmpImgBuf[c] << SHL); 
     } 
} 

To remember how many bits were shifted when saving, we must save the stored bit depth (in the TIFFTAG_BITSPERSAMPLE tag) as well as the image data’s real bit depth (in the TIFFTAG_IMAGEDEPTH tag). 

Different image viewers/editors handle mono 16 bit TIFF images in different ways. The Microsoft™ Picture and Fax Viewer, for example, will not display images saved like this. KDE’s Kuickshow also doesn’t handle files like this properly. ImageJ does. ImageJ is a powerful image analyzer written in JAVA™ and it handles TIFF images in mono16 correctly. If you rely on a special tool that doesn’t handle your saved files properly, I would suggest that your first step is to check which tags the tool requires. A list of all tags and possible values can be found in \libtiff\tiff.h. 

3 Chip vs. 1 Chip Color

Three chip color cameras always contain a prism which divides the incoming light rays into their red, green and blue components. Each chip then receives a single color at full resolution. 

One chip area scan cameras use a single sensor that is covered by a color filter with a fixed, repetitive pattern. Filters with several different patterns are used but the Bayer color filter is the most common. The illustration to the right shows a portion of the Bayer filter. When a color filter is used with a single sensor, each individual cell in the sensor gathers light of only one particular color. To reconstruct a complete color image, an interpolation is needed. The red, green and blue information is interpolated across several adjacent cells to determine the total color content of each individual cell. 

One chip line scan cameras use a sensor that has three rows of cells: a red row, a green row and a blue row. As an area on an object moves past the camera, the area is examined first by the cells in the red row, second by the cells in the green row and third by the cells in the blue row. The information from the red, green and blue cells is then combined to produce a full color image.
 

3-Chip Color Advantages:
 

1. Full resolution RGB Images
2. Easier software handling of the data output

3-Chip Color Disadvantages: 
 

1. High camera cost due to the need for a prism and three sensor chips
2. Large camera housing needed for prism and sensors
3. Typically require expensive, special optics
4. High weight

 

1-Chip Color Advantages:
 

1. Much less expensive
2. Smaller size
3. Lower weight

1-Chip Color Disadvantages: 
 

1. For area scan cameras, an interpolation algorithm must be run to achieve full color resolution
2. For line scan cameras, spatial correction must be done to combine the color data from the three sensor rows

 

When deciding on a three chip or a one chip camera, you must consider the advantages and disadvantages of each and determine which type is most appropriate for your application. Experience shows that in many cases, a one chip camera is more than adequate and is the cost efficient solution.

Check your network adapter settings. 

Go to Start > Control Panel > Network Connections and right click on your network adapter. Select Properties from the drop down menu. When the properties window opens, click the Configure button. Select the Advanced tab and in the property box on the left, select the property called “Jumbo Frames”. Set the value as high as possible (for jumbo frames, it’s approximately 16 KB). 

Be aware that if your adapter doesn’t support jumbo frames, you might not be able to operate your camera at the full frame rate.

The Moiré Effect

The word “moiré” was first used by weavers and it derives from the word mohair, a kind of cloth made from the fine hair of an Angora goat. The physical nature of moiré lies in the interference between two or more regular structures with different spatial frequencies. You can see this effect in real life as you walk past two fences located one behind the other or when you look at folded sheer stockings. 

The Mathematik.com web site has a very good animation that shows the moiré effect. Click here to see the animation. 

The moiré effect can produce interesting and beautiful geometric patterns. In the machine vision world, however, the phenomenon can degrade the quality and resolution of captured images. It can occur when the image from a camera is reproduced on a computer display and then rendered in a screened or dot-matrix format. The fine matrix of dots in the original image almost always conflicts with the matrix of dots in the reproduction. This generates a characteristic criss-cross pattern on the reproduced image. Moiré patterns can also be created by plotting a series of curves on a computer screen. In this case, the interference is caused by the rasterization of the finite-sized pixels.

When you capture an image with a Basler camera, it is digitized by a matrix of photosensitive elements. Each photosensitive element in the matrix is a discrete unit and the elements are arranged in a regular pattern. If the captured image also has a very regular pattern (e.g., a jacket with a fine fabric weave), that pattern will be superimposed on the matrix of photo elements. The superimposition of the pattern in the captured image on the pattern in the photo element matrix can cause interference and may result in moiré patterns. 

There are several different techniques for limiting moiré effects: 
 

1. If your captured images show a heavy moiré effect, try rotating the camera or the object. Experiment with the angle of rotation to achieve the minimal moiré pattern.
2. Change the position of the camera. A simple change in the camera angle can result in significant moiré reduction.
3. Change the focus. The moiré effect is most noticeable in images with high sharpness. Overlapped fine details boost the effect.
4. Try a lens with a different focal length. This can reduce the moiré effect.

In some cases, you may not be able to completely eliminate the moiré effect by using these techniques. But they should at least give you a noticeable improvement in your results. 

BCAM Installation Problems

After a new installation of the BCAM driver or after updating an existing BCAM driver installation to the current version, users sometimes encounter problems. These instructions will help you solve the most typical problems. The instructions are written for Windows XP, but if you are operating a system with Windows 2000, the solutions work in a very similar manner. 

If you are using a laptop: 

If you are using a laptop, please make sure that the camera is being properly supplied with power. Many IEEE 1394 PCMCIA cards only have a 4 pin connector instead of the normal 6 pin connector. The two missing pins are for the wires used to supply power to the camera. If your PCMCIA card has only 4 pins, it will not supply power to the camera and your camera will not work. In this case, you can either switch to a card that does supply power to the camera or you can add a powered IEEE 1394 hub to your system. 

Common Error Messages: 

After installing the driver or an update, you may see one of these error messages when you open the BCAM viewer: 

“The Versions of the BCAM API(1.6) and BCAM Driver(1.8) don’t match” 

“BCAM is not compatible with driver 0.0” 

“Die Netzwerkanforderung wird nicht unterstuetzt” (“The network request is not supported”) 

All of these messages have the same origin, i.e., the camera driver is not properly associated with your camera. To correct this error, you must change the driver association. 

To begin the process, click Start > Control Panel > System > Hardware > Device Manager. A Device manager window will open.

In the Device Manager, locate your camera, right-click it, and use the Update Driver function to manually select the Basler BCAM driver. The correct BCAM driver is now associated with your camera. If you open the BCAM viewer, you will see the Basler camera installed and you can grab images. Have Fun! 

Some users of Basler’s pylon 2.0 software may want to build their own applications without the need to buy Microsoft Visual Studio. Application notes are available that describe a way to build pylon based applications for free using Microsoft Visual C++ Express and the Microsoft Platform SDK.

Click here to download the application notes.

Application notes are available that provide a detailed description of how to interface Basler GigE cameras with VisionPro 5.1 software from Cognex. 

Check your network adapter settings. 

Go to Start > Control Panel > Network Connections and right click on your network adapter. Select Properties from the drop down menu. When the properties window opens, click the Configure button. 

Look for a tab with a name such as “Connection speed”. If you see a tab like this, select the tab and set the “Speed & Duplex” property to “Automatic identification” or “Auto”. 

If you do not see a “Connection Speed” tab, select the “Advanced” tab and look for the “Speed & Duplex” property. Set the “Speed & Duplex” property to “Automatic identification” or “Auto”.

You must configure the IP address of your network adapter and your camera. Generally, there are two approaches to configuring a network adapter: “Fixed Address” or “DHCP / Alternate Configuration = APIPA (Automatic Private IP Addressing)”. 

To configure your network adapter, please follow the procedure described in the scout-g User’s Manual.

To change your camera’s IP configuration, you can use the Basler IP Configuration Tool. This tool was automatically installed when you installed the pylon Viewer. Detailed information about using the IP Configuration Tool is included in the scout-g User’s Manual.

When configuring your camera’s IP address, keep the following guidelines in mind: 
 

1. For a camera to communicate properly, it must be in the same subnet as the adapter to which it is connected.

 

2. The camera must have an IP address that is unique within the network.

 

3. The recommended range for fixed IP addresses is from 172.16.0.1 to 172.31.255.254 and from 192.168.0.1 to 192.168.255.254. These address ranges have been reserved for private use according to IP standards.

Tip: There’s a convenient “trick” that is handy during your initial camera design-in process or when working with cameras in your lab. You can set your network adapter to a fixed address in the automatic IP address range (169.254.0.0 to 169.254.255.255) with a subnet mask of 255.255.0.0 and you can set your camera(s) for automatic IP address assignment. With these settings, a camera and an adapter can establish a network connection very quickly. This can save you some time if you are frequently connecting and disconnecting cameras or switching the system on and off as you would during design-in. 

The set of parameters that a Basler camera uses for camera control is known as the “work set” of parameters. When you use the Basler BCAM 1394 Driver or another tool to change parameter values, you are changing the settings in the work set. The work set resides in the camera’s volatile memory and is lost at power off or camera reset. 

Most Basler 1394 cameras have a feature that lets you save the current work set into one of three “memory channels”. A set of parameters saved in one of the memory channels is known as a “user set”. The memory channels are part of the camera’s internal, non-volatile memory and the user sets stored in the memory channels will be retained when the camera is powered off or reset. No memory channel data will be stored on your PC.

Each camera also has a set of factory parameter settings stored in a separate memory channel. The memory channel containing the “factory set” can’t be overwritten. 

You can save up to three different user sets in the camera’s memory channels. If you desire, you can load one of the saved user sets from a memory channel into the camera’s work set. And if you want to restore the camera to its initial factory settings, you can simply load the camera’s factory set into the work set. 
 

By default, the camera always loads the factory set into the work set at power up or reset. This default behavior can be changed by using the camera’s “startup memory channel” smart feature. This feature lets you designate one of the three stored user sets as the parameter set that will be loaded into the work set at power up or camera reset. 

You can use the link below to download a zip file containing a small tool I call the “BCAM User Set Configurator”. This tool will help you to perform the actions described above. The file also contains the source code for the tool to show you how you can program these features using the BCAM SDK and the SFF.

YUV Color Coding

A CCD or a CMOS sensor alone is not able to detect the color of incident light. In reality, each pixel in the sensor simply detects the intensity of the incident light. But when a color pattern filter is applied to the sensor, each pixel becomes sensitive to only one color - red, green or blue. The following table shows the color arrangement of a “Bayer Pattern” filter on a sensor with a size of X x Y (with X and Y being multiples of 2). 

Since the arrangement of the colors in the Bayer pattern filter is known, a PC can use the raw information transmitted for the pixels to interpolate full RGB color information for each pixel in the sensor. Instead of using the raw sensor information, however, it is more common to use a color coding known as YUV. The block diagram below illustrates the process of conversion inside a Basler IEEE 1394 camera. To keep things simple, we assume that the sensor collects pixel data at an 8 bit depth.

As a first step, an algorithm calculates the RGB values for each pixel. This means, for example, that even if a pixel is sensitive to green light only, the camera gets full RGB information for the pixel by interpolating the brightness information from adjacent red and blue pixels. This is, of course, just an approximation of the real world. There are many algorithms for doing RGB interpolation and the complexity and calculation time of each algorithm will determine the quality of the approximation. Basler IEEE 1394 color cameras have an effective built-in algorithm for this RGB conversion. 
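
As a deliberately simple sketch of the idea (this reconstructs one RGB value per 2x2 Bayer cell and is not the interpolation algorithm built into the cameras; the R-G/G-B ordering of the top-left cell is an assumption):

#include <vector>
#include <cstdint>

struct Rgb { std::uint8_t r, g, b; };

// Combine each 2x2 Bayer cell (R G / G B) into one RGB value. Real interpolation
// algorithms reconstruct full resolution instead of halving it like this.
std::vector<Rgb> demosaicHalfRes(const std::vector<std::uint8_t>& raw,
                                 int width, int height)
{
    std::vector<Rgb> out;
    out.reserve(static_cast<std::size_t>(width / 2) * (height / 2));
    for (int y = 0; y + 1 < height; y += 2)
        for (int x = 0; x + 1 < width; x += 2)
        {
            const std::uint8_t r  = raw[static_cast<std::size_t>(y) * width + x];
            const std::uint8_t g1 = raw[static_cast<std::size_t>(y) * width + x + 1];
            const std::uint8_t g2 = raw[static_cast<std::size_t>(y + 1) * width + x];
            const std::uint8_t b  = raw[static_cast<std::size_t>(y + 1) * width + x + 1];
            out.push_back({ r, static_cast<std::uint8_t>((g1 + g2) / 2), b });
        }
    return out;
}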

A disadvantage of RGB conversion is that the amount of data for each pixel is inflated. If a single pixel normally has a depth of 8 bits, after conversion it will have a depth of 8 bits per color (red, green and blue) and will thus have a total depth of 24 bits. 

YUV coding converts the RGB signal to an intensity component (Y) that ranges from black to white plus two other components (U and V) which code the color. The conversion from RGB to YUV is linear, occurs without loss of information and does not depend on a particular piece of hardware such as the camera. The standard equations for accomplishing the conversion from RGB to YUV are: 

Y = 0.299 R + 0.587 G + 0.114 B 
U = 0.493 * (B - Y) 
V = 0.877 * (R - Y) 

In practice, the coefficients in the equations may deviate a bit due to the dynamics of the sensor used in a particular camera. If you want to know how the RGB to YUV conversion is accomplished in a particular Basler camera, please refer to the camera’s user manual for the correct coefficients. This information is particularly useful if you want to convert the output from a Basler IEEE 1394 camera from YUV back to RGB.
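
Using the standard coefficients above, the conversion for a single pixel can be sketched as follows (keep in mind, as noted above, that the exact coefficients for a specific camera may differ):

struct Yuv { double y, u, v; };

// Convert one RGB pixel (each channel 0..255) to YUV using the standard equations.
Yuv rgbToYuv(double r, double g, double b)
{
    Yuv out;
    out.y = 0.299 * r + 0.587 * g + 0.114 * b;
    out.u = 0.493 * (b - out.y);
    out.v = 0.877 * (r - out.y);
    return out;
}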

The diagram below illustrates how color can be coded with the U and V components and how the Y component codes the intensity of the signal.

This type of conversion is also known as YUV 4:4:4 sampling. With YUV 4:4:4, each pixel gets brightness and color information and the “4:4:4” indicates the proportion of the Y, U and V components in the signal. 

To reduce the average amount of data transmitted per pixel from 24 bits to 16 bits, it is more common to include the color information for only every other pixel. This type of sampling is also known as YUV 4:2:2 sampling. Since the human eye is much more sensitive to intensity than it is to color, this reduction is almost invisible even though the conversion represents a real loss of information. As defined in the DCAM specification, YUV 4:2:2 digital output from a Basler camera has a depth that alternates between 24 bits per pixel and 8 bits per pixel (for an average bit depth of 16 bits per pixel). 

As shown in the table below, when a Basler camera is set for YUV 4:2:2 output, each quadlet of image data transmitted by the camera will contain data for two pixels. In the table, K represents the number of a pixel in a frame and one row in the table represents a quadlet of data transmitted by the camera.

For every other pixel, both the intensity information and the color information are transmitted and this results in a 24 bit depth for those pixels. For the remaining pixels, only the intensity information is preserved and this results in an 8 bit depth for them. As you can see, the average depth per pixel is 16 bits. 
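
The short Python sketch below shows how such a 4:2:2 stream could be packed. The U Y V Y byte order used here is an assumption made for the example; the exact quadlet layout is defined in the DCAM specification and in the camera's user manual.

    def pack_yuv422(pixels_yuv):
        # pixels_yuv: list of (Y, U, V) tuples, one per pixel, values 0-255.
        # Every pair of pixels shares one U and one V value, so two pixels
        # occupy one 4-byte quadlet = 16 bits per pixel on average.
        out = bytearray()
        for (y0, u0, v0), (y1, _u1, _v1) in zip(pixels_yuv[0::2], pixels_yuv[1::2]):
            out += bytes((int(u0), int(y0), int(v0), int(y1)))
        return bytes(out)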

On all Basler IEEE 1394 color cameras, you are free to choose between an output mode that provides the raw sensor output for each pixel and a mode that provides a high quality YUV 4:2:2 signal. Due to the high bandwidth that would be needed to provide full RGB output at 24 bits/pixel, Basler IEEE 1394 color cameras do not provide RGB output. 

 

Color Filters for Single-Sensor Color Cameras 

In general, single-sensor color cameras use a monochrome sensor with a color filter pattern. Another way to achieve a color image with only one sensor would be to use a revolving filter wheel in front of a monochrome sensor, but this method has its limitations. 

With the color filter pattern method of color imaging, no object point is projected on more than one sensor pixel, that is, only one measurement (for a single color or sum of a set of colors) can be made for each object point. 

There are several different filter methods for generating a color image from a monochrome sensor. Some frequently used filter arrangements are detailed below. 

Bayer Color Filter (Primary Color Mosaic Filter) 

The following table 1 shows the filter pattern for a sensor of size xs x ys (xs and ys being multiples of 2):

Complementary Color Mosaic Filter 

The following table 2 shows the filter pattern for a sensor of size xs x ys (xs and ys being multiples of 2):

This is basically the same arrangement as the Bayer filter pattern, but instead of using primary colors (R, G, B) it works with complementary colors (magenta, cyan, yellow). The reason for this is that a primary color filter blocks 2/3 of the spectrum (i.e. green and blue for a red filter) while a complementary filter blocks only 1/3 of the spectrum (i.e. blue for a yellow filter). Thus, the sensor is about twice as sensitive. The tradeoff is a somewhat more complicated computation of the R, G, B values, requiring the input of each complementary color, as sketched below. 
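
As a rough sketch of that computation, the ideal relations cyan = G + B, magenta = R + B and yellow = R + G can be inverted as shown below; real filters deviate from these ideals, so actual cameras use calibrated conversion matrices rather than this simple inversion.

    def cmy_to_rgb(cy, mg, ye):
        # Idealized inversion of cyan = G + B, magenta = R + B, yellow = R + G.
        r = (ye + mg - cy) / 2.0
        g = (ye + cy - mg) / 2.0
        b = (mg + cy - ye) / 2.0
        # Clamp small negative values caused by noise or non-ideal filters.
        return max(r, 0.0), max(g, 0.0), max(b, 0.0)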

Primary Color Vertical Stripe Filter 

Table 3 shows the filter pattern for a sensor of size xs x ys (xs being a multiple of 4):

This arrangement is very simple and well suited to machine vision applications. The drawback is that the horizontal resolution is only 1/3 of the vertical resolution.

Binning in CCD Cameras

Binning increases the camera’s sensitivity to light by summing the charges from adjacent pixels in the CCD sensor into one pixel. There are three types of binning available: horizontal binning, vertical binning, and full binning. 

With horizontal binning, pairs of adjacent pixels in each line of the sensor are summed (see the drawings below). With vertical binning, pairs of adjacent pixels from two lines in the sensor are summed. Full binning is a combination of horizontal and vertical binning in which four adjacent pixels are summed. 

Using horizontal or vertical binning generally increases the camera’s sensitivity by up to two times normal. Full binning increases sensitivity by up to four times normal. On some camera models, using horizontal or full binning increases the camera’s maximum frame rate (this is not true for all cameras and depends on the architecture of the sensor used in the camera). 

With horizontal binning active, horizontal image resolution is reduced by half. For example, if a camera’s normal horizontal resolution is 1300, horizontal binning would reduce this to 650. With vertical binning active, vertical image resolution is reduced by half. For example, if a camera’s normal vertical resolution is 1030, vertical binning would reduce this to 515. When full binning is used, both horizontal and vertical resolution are reduced by half.
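
The effect on resolution is easy to reproduce in software. The numpy sketch below simulates binning by summing adjacent pixels of an image with even dimensions; keep in mind that real binning happens in the sensor's shift registers before readout, which is why it also improves sensitivity.

    import numpy as np

    def bin_image(img, horizontal=True, vertical=True):
        # Sum pairs of adjacent columns and/or rows (assumes even width and height).
        out = img.astype(np.uint32)
        if horizontal:
            out = out[:, 0::2] + out[:, 1::2]    # width is halved
        if vertical:
            out = out[0::2, :] + out[1::2, :]    # height is halved
        return out

    frame = np.random.randint(0, 256, size=(1030, 1300), dtype=np.uint16)
    print(bin_image(frame).shape)                # a 1300 x 1030 sensor -> (515, 650)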

FireWire™ 

FireWire is a standardized serial communications bus similar to USB that allows digital devices to talk to one another at high speed. FireWire operates at a maximum speed of 400 Megabits per second and can handle up to 63 connected devices such as hard drives, monitors, printers, computers and cameras. A FireWire system has no need for a host controller; each device on the system can operate on its own but must follow strict rules about when it is allowed to talk. 

Because FireWire is standardized, all FireWire compliant devices should easily plug and play. FireWire has also been designed to allow hot plug and unplug. Since each type of FireWire compliant device is assigned a worldwide identification number, there is little possibility of identification conflicts within the system. 

FireWire was initially developed at Apple Computer and Apple still retains the FireWire trademark. The Institute of Electrical and Electronics Engineers formalized the rules for communication on a FireWire bus in a document called the IEEE 1394-1995 specification and you will often hear FireWire referred to as IEEE 1394. The IEEE 1394 document defines the electronic and software protocols used to transmit data over a FireWire system and also specifies the format of the cabling and connectors used with FireWire compliant devices. 

The FireWire bus system is much different from the data interface that we use now. Currently, one camera interfaces with one frame grabber and communication between the two is optimized for this one-to-one relationship. With a FireWire bus system, many devices can share the communication line. To avoid conflicts between the devices, strict rules are needed to determine which device can talk and when it can talk. The IEEE 1394 specification provides the rules to ensure that communication between the devices on the FireWire bus takes place in an orderly fashion. 

Sensitivity

The response curve for a light sensitive sensor can be divided into three parts: the dark area, the linear area and the saturation area. A typical response curve is shown in the graph below. 

The dark area of the response curve shows the sensor’s response to very low light. The output of the sensor in the dark area is very low, is noisy and is unpredictable. As you gradually increase the light falling on a sensor, you will find a point where the output of the sensor begins to increase predictably as the amount of light increases. This point is called the Noise Equivalent Exposure (NEE). 

After the NEE point is reached, the output of the sensor becomes linear. The output remains linear until a point called the Saturation Equivalent Exposure (SEE) is reached. At this point, increasing the light intensity results in a nonlinear increase in the sensor output. 

The gradient of the linear portion of the sensor’s response curve is commonly referred to as sensitivity and is usually measured in V/µJ/cm². The higher a sensor’s output voltage is for a given amount of light, the higher its sensitivity. 

But when you are discussing sensors, talking about sensitivity alone does not make sense. For one thing, NEE is also very important. Since a sensor with a high NEE will be blind at low light levels, NEE should be as low as possible. 

Another point to consider is that a digital camera is a system and that sensor sensitivity is just one of the factors involved in the output signal from the camera. Electronic devices in the camera such as analog-to-digital converters and amplifiers also influence the output signal. At Basler, we feel that a camera’s “responsivity” is a better measure of camera performance. We also think that since our cameras are digital, responsivity should be stated as DN/µJ/cm² (DN stands for digital number). The graph below shows a responsivity curve. 

If a camera provides a gain feature, as most of them do, responsivity will depend on the gain setting. And responsivity really only makes sense when it is stated in combination with a measurement of the camera’s noise, such as peak-to-peak noise or the signal-to-noise ratio. 

Let’s consider an example. Suppose that you are comparing two cameras and that they have the following specifications: 

  Camera One: Responsivity = 1 DN/µJ/cm², Noise = 2 DN (peak-to-peak) 
  Camera Two: Responsivity = 2 DN/µJ/cm², Noise = 5 DN (peak-to-peak) 

At first glance, camera two seems better than camera one because its responsivity is higher. However if camera one has a gain feature, we can adjust the gain and increase the responsivity to two. Keep in mind that if we adjust the gain to double the responsivity from one to two, we will also double the noise. Now we have this situation: 

  Camera One: Responsivity = 2 DN/µJ/cm², Noise = 4 DN (peak-to-peak) 
  Camera Two: Responsivity = 2 DN/µJ/cm², Noise = 5 DN (peak-to-peak) 

Which camera is better? They now both have the same responsivity, but camera one has lower noise. Camera one would be the better choice. 
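
The same comparison can be written as a few lines of arithmetic. The helper below scales a camera's responsivity up with gain and reports the noise you end up with at the matched responsivity; the function name is made up for this example, and the numbers are simply those used above.

    def noise_at_matched_responsivity(responsivity_dn, noise_dn, target_responsivity):
        # Gain multiplies signal and noise by the same factor.
        gain = target_responsivity / responsivity_dn
        return noise_dn * gain

    # Camera one: 1 DN/µJ/cm², 2 DN noise; camera two: 2 DN/µJ/cm², 5 DN noise.
    print(noise_at_matched_responsivity(1, 2, 2))   # camera one -> 4 DN
    print(noise_at_matched_responsivity(2, 5, 2))   # camera two -> 5 DN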

The lesson to be learned from all of this is that sensor sensitivity alone does not tell the entire story and that we should be sure to use similar measuring criteria when we are comparing cameras.

Area of Interest Feature

Many of Basler’s area scan cameras include an area of interest (AOI) feature. The AOI feature lets the user specify a portion of the camera’s sensor array and during operation, only the pixel information from the specified portion of the array is transmitted out of the camera. 

The main advantage of the AOI feature is that as you decrease the height of the AOI, there is usually an increase in the camera’s maximum allowed frame rate. In other words, when you capture smaller images, you can capture more images per second. This can be very useful in an application where you need to capture smaller images at higher speeds. 

Be aware that on most area scan cameras with an AOI feature, decreasing the AOI height will result in a higher maximum allowed frame rate - but this is not true for every camera model. Also, on some camera models the maximum allowed frame rate will increase when both the AOI height and the AOI width are decreased. You should consult the user’s manual for your camera model to learn the specific details of the AOI feature on your camera.

RGB Color Space

Because the human eye only has color-sensitive receptors for red, green and blue, it is theoretically possible to decompose every visible color into combinations of these three “primary colors.” Color monitors, for instance, can display millions of colors simply by mixing different intensities of red, green and blue. It is most common to place the range of intensity for each color on a scale from 0 to 255 (one byte). The range of intensity is also known as the “color depth.” 

The possibilities for mixing the three primary colors together can be represented as a three-dimensional coordinate space with the values for R (red), G (green) and B (blue) on each axis. 
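
A quick numeric illustration: with one byte per primary color, a 24-bit RGB pixel can take 256 x 256 x 256 distinct values, and any single color is just one point in that three-dimensional space.

    # 8 bits per channel gives 256**3 = 16,777,216 possible colors.
    print(256 ** 3)          # 16777216
    orange = (255, 165, 0)   # one example point (R, G, B) in the RGB color space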

If you have multiple network adapters in a single PC, keep the following guidelines in mind: 
 

1. Only one adapter in the PC can be set to use auto IP assignment. If more than one adapter is set to use auto assignment, auto assignment will not work correctly and the cameras will not be able to connect to the network. In the case of multiple network adapters, it is best to assign fixed IP addresses to the adapters and to the cameras. You can also set the cameras and the adapters for DHCP addressing and install a DHCP server on your network.

 

2. Each adapter must be in a different subnet. The recommended ranges for fixed IP addresses are from 172.16.0.1 to 172.31.255.254 and from 192.168.0.1 to 192.168.255.254. These address ranges have been reserved for private use according to IP standards.

 

3. If you are assigning fixed IP addresses to your cameras, keep in mind that for a camera to communicate properly with a network adapter, it must be in the same subnet as the adapter to which it is attached.
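
If you want to verify the third guideline programmatically, a quick check with Python's ipaddress module looks like this (the /24 netmask here is only an example; substitute the netmask your adapter actually uses):

    import ipaddress

    def same_subnet(camera_ip, adapter_ip, netmask="255.255.255.0"):
        # True if the camera and the adapter addresses fall in the same subnet.
        cam = ipaddress.ip_interface(f"{camera_ip}/{netmask}")
        ada = ipaddress.ip_interface(f"{adapter_ip}/{netmask}")
        return cam.network == ada.network

    print(same_subnet("192.168.1.20", "192.168.1.1"))   # True  - same subnet
    print(same_subnet("192.168.2.20", "192.168.1.1"))   # False - different subnet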

You have just installed the pylon Viewer and you’re trying to acquire your first images, but it just doesn’t work. 

Unlike with the BCAM Viewer, installing the pylon Viewer by itself is not enough. Depending on the type of GigE network adapter in your computer, you must also install either the Basler Filter Driver or the Basler Performance Driver. The filter driver and the performance driver are network drivers for pylon. Before you can acquire images, one of these drivers must be installed. 

The Basler Filter Driver can be used with all common GigE network adapters. The Basler Performance Driver is appropriate for use with network adapters that have specific Intel chipsets (e.g., the Intel Pro/1000 series). 

Keep in mind that before installing a new version of the filter driver, you must make sure that you don’t have any old versions of it on your system. If you do have an older version of the filter driver installed, you must remove it before installing a newer version of the driver. For more details regarding the installation and removal procedure, please refer to the scout-g User’s Manual.

Basic Camera Principles 

The principle of how a camera works is that during line exposure, photons from a light source strike the pixels in the camera’s sensor and generate electrons. At the end of each line exposure, the electrons collected by each pixel are transported to an analog-to-digital converter. For each pixel, the converter provides a digital output signal that is proportional to the number of electrons collected by the pixel. 
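
In an idealized form, that proportionality can be written as a tiny function; the full-well capacity and bit depth below are made-up example numbers, not the values of any specific camera.

    def electrons_to_dn(electrons, full_well=20000, bit_depth=8):
        # Idealized ADC: the digital number is proportional to the collected charge,
        # clipped at the converter's saturation value.
        max_dn = 2 ** bit_depth - 1
        return min(round(electrons / full_well * max_dn), max_dn)

    print(electrons_to_dn(10000))   # half of full well -> about mid-scale (128)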

Below Minimum Line Rates 

If a camera is triggered at a rate below the specified minimum, it is much easier to fall into an over exposure situation. This happens due to an effect called “shutter inefficiency”. The electronic shutter on digital cameras is not 100% efficient, and the pixels in the camera will collect some photons even when the shutter is closed. At very low line rates, you have long periods of time between exposures when the shutter is closed but the pixels are still collecting some photons and generating electrons. When the electrons collected with the shutter closed are added to the electrons collected during an exposure, the electrons can flood the electronics around the pixel. 

After an Over Exposure 

After an over exposure or with a trigger rate below 1kHz, it takes several readout cycles to remove all the electrons from the pixels and the electronics. For this reason, gray values will be abnormally high during the first several readouts after an over exposure. 

Solutions

Use a camera that can operate at line rates near zero, such as the L304k, L304kc, L400k, and L800k

or,

If you use a camera with a higher specified minimum line rate:

  • Don’t operate the camera below its minimum specified rate.
  • Design an application which accepts a few lines that are brighter than normal.
  • Run the camera in free-run mode and collect only the lines that you need.
  • Send dummy trigger signals to the camera and ignore the lines generated by the dummy triggers.

Still Need Help?

Here you will find help and support for Machine Vision, Infrared, and Security/Surveillance solutions provided by Channel Systems. Information is separated into product families to help you find the information you need. If you are unable to solve your problem with the information provided here, please contact us for further assistance.

Contact