Memory Management

How to manage memory in PyVISA applications. Context managers, data type selection, bounded buffers, and leak detection for long-running tests.

PyVISA scripts that run for hours (production testing, data logging) accumulate memory unless you manage it. The patterns below prevent the common leaks.

Use Context Managers

Context managers are the simplest way to prevent resource leaks: the instrument is closed even if your code raises an exception.

import pyvisa

rm = pyvisa.ResourceManager()

with rm.open_resource("TCPIP::192.168.1.100::INSTR") as scope:
    scope.timeout = 10000
    waveform = scope.query_binary_values("CURVE?", datatype="h")
    print(f"Captured {len(waveform)} points")
# scope.close() called automatically

rm.close()

For multiple instruments, nest with blocks or use a small helper.

import pyvisa
from contextlib import contextmanager

@contextmanager
def open_instruments(resource_strings):
    """Open multiple instruments, close all on exit."""
    rm = pyvisa.ResourceManager()
    instruments = []
    try:
        for rs in resource_strings:
            inst = rm.open_resource(rs)
            inst.timeout = 10000
            instruments.append(inst)
        yield instruments
    finally:
        for inst in instruments:
            inst.close()
        rm.close()

with open_instruments([
    "TCPIP::192.168.1.100::INSTR",
    "USB0::0x2A8D::0x0101::MY53220001::INSTR",
]) as (scope, dmm):
    voltage = float(dmm.query("MEAS:VOLT:DC?"))
    print(f"Voltage: {voltage:.4f} V")

Pick the Right Data Type

Choosing a smaller dtype saves memory proportionally. For a 10M-point waveform:

Type      Bytes/sample   10M samples
int16     2              19 MB
int32     4              38 MB
float32   4              38 MB
float64   8              76 MB

Use float32 unless you need the extra precision. When you only need raw ADC counts, stay in int16.

import numpy as np

# Transfer as int16, convert to float32 (not float64)
raw = scope.query_binary_values("CURVE?", datatype="h")
voltages = np.array(raw, dtype=np.float32) * y_scale + y_offset

# Don't do this (wastes 2x memory)
# voltages = np.array(raw, dtype=np.float64) * y_scale + y_offset
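
The savings are easy to verify directly with NumPy's nbytes attribute. A quick check, using a zero-filled array to stand in for a capture since no instrument is needed for the size math:

```python
import numpy as np

# Simulated 10M-sample capture (zeros stand in for real ADC data)
n = 10_000_000
raw = np.zeros(n, dtype=np.int16)    # as transferred with datatype="h"
as_f32 = raw.astype(np.float32)      # scaled-voltage working copy
as_f64 = raw.astype(np.float64)      # avoid unless precision demands it

print(f"int16:   {raw.nbytes / 1024**2:.0f} MB")
print(f"float32: {as_f32.nbytes / 1024**2:.0f} MB")
print(f"float64: {as_f64.nbytes / 1024**2:.0f} MB")
```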

Bounded Buffer for Continuous Acquisition

When logging data indefinitely, use a fixed-size buffer to cap memory usage. Old data rolls off as new data arrives.

import numpy as np
from collections import deque
import gc

class BoundedBuffer:
    """Fixed-size buffer that evicts old data when full."""

    def __init__(self, max_items=1000, dtype=np.float32):
        self.buffer = deque(maxlen=max_items)
        self.dtype = dtype
        self.total_bytes = 0

    def add(self, data):
        arr = np.array(data, dtype=self.dtype)
        if len(self.buffer) == self.buffer.maxlen:
            # deque(maxlen=...) evicts the oldest item on append,
            # so account for its bytes before it is dropped
            self.total_bytes -= self.buffer[0].nbytes
        self.buffer.append(arr)
        self.total_bytes += arr.nbytes

    def recent(self, n=None):
        items = list(self.buffer)
        return items[-n:] if n else items

    def clear(self):
        self.buffer.clear()
        self.total_bytes = 0
        gc.collect()

# Usage
buf = BoundedBuffer(max_items=500)

for i in range(10000):
    waveform = scope.query_binary_values("CURVE?", datatype="h")
    buf.add(waveform)

    if i % 100 == 0:
        print(f"Buffer: {len(buf.buffer)} items, {buf.total_bytes / 1e6:.1f} MB")
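
The eviction behavior comes from deque(maxlen=N) itself, which is worth seeing in isolation. A minimal sketch with simulated waveform chunks in place of scope reads; the last few chunks can be stitched into one array for analysis:

```python
import numpy as np
from collections import deque

buf = deque(maxlen=500)  # same cap as the BoundedBuffer example

# Simulated int16 acquisitions stand in for scope.query_binary_values(...)
rng = np.random.default_rng(0)
for _ in range(2000):
    buf.append(rng.integers(-32768, 32767, size=1000, dtype=np.int16))

# Only the newest 500 chunks survive; stitch the last 10 for analysis
window = np.concatenate(list(buf)[-10:])
print(window.shape)  # (10000,)
```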

Memory-Mapped Storage

For datasets larger than RAM, write directly to disk with np.memmap. The OS pages data in and out as needed.

import numpy as np
import tempfile
import os

# Create a 4 GB memory-mapped file (1 billion float32 samples)
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".dat")
data = np.memmap(tmp.name, dtype=np.float32, mode="w+", shape=(1_000_000_000,))

write_idx = 0

for i in range(10000):
    waveform = scope.query_binary_values("CURVE?", datatype="h")
    n = len(waveform)

    if write_idx + n > len(data):
        write_idx = 0  # Wrap around (circular)

    data[write_idx:write_idx + n] = np.array(waveform, dtype=np.float32)
    write_idx += n
    data.flush()  # flushing every write is durable but slow; batch flushes if throughput matters

# Read back the last 1M samples
recent = data[max(0, write_idx - 1_000_000):write_idx]

# Cleanup
del data
os.unlink(tmp.name)
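
The same file can be reopened later with mode="r" without loading it into RAM, which is how a separate analysis script would read the capture back. A small sketch (a 1000-sample file keeps it quick; the pattern is identical at 4 GB):

```python
import numpy as np
import tempfile
import os

# Write a small memmap, then reopen it read-only
tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".dat")
tmp.close()
path = tmp.name

w = np.memmap(path, dtype=np.float32, mode="w+", shape=(1000,))
w[:] = np.arange(1000, dtype=np.float32)
w.flush()
del w  # release the write handle

# Read-only view: the OS pages data in on demand
r = np.memmap(path, dtype=np.float32, mode="r", shape=(1000,))
last = float(r[999])
print(last)  # 999.0

del r
os.unlink(path)
```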

Periodic Garbage Collection

Python's GC usually handles things, but in tight measurement loops with large temporary arrays, forcing a collect every N iterations prevents memory creep.

import gc
import numpy as np

gc_interval = 100

for i in range(10000):
    raw = scope.query_binary_values("CURVE?", datatype="h")
    spectrum = np.abs(np.fft.fft(raw)) ** 2
    result = np.mean(spectrum)

    # Delete large temporaries explicitly
    del raw, spectrum

    if i % gc_interval == 0:
        gc.collect()

Detect Memory Leaks with tracemalloc

Python's built-in tracemalloc shows you exactly where memory is growing.

import numpy as np
import tracemalloc

tracemalloc.start()
snap1 = tracemalloc.take_snapshot()

# Run your measurement loop
data_log = []
for i in range(1000):
    waveform = scope.query_binary_values("CURVE?", datatype="h")
    processed = np.array(waveform, dtype=np.float64)  # float64: twice the memory of float32
    data_log.append(processed[:1000])  # slice is a view -- it keeps the whole array alive

snap2 = tracemalloc.take_snapshot()

# Show the top 10 memory-growth locations
for stat in snap2.compare_to(snap1, "lineno")[:10]:
    print(stat)
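
The retention in the loop above is a NumPy subtlety worth knowing: basic slicing returns a view, and a view holds a reference to its entire base array. Copying the slice releases the rest. A minimal demonstration:

```python
import numpy as np

big = np.zeros(1_000_000, dtype=np.float64)  # 8 MB

view = big[:1000]         # view: keeps all 8 MB of `big` reachable
copy = big[:1000].copy()  # copy: owns just 8 KB, `big` can be freed

print(view.base is big)   # True -- the full array cannot be collected
print(copy.base is None)  # True -- independent buffer
print(copy.nbytes)        # 8000
```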

Common Pitfalls

Pitfall                                    Fix
Never closing instruments                  Use with statements or try/finally
Appending to a list forever                Use deque(maxlen=N) or write to disk
Using float64 by default                   Use float32 or int16 when possible
Not deleting large temporaries             del the array, then gc.collect() in tight loops
Opening a new ResourceManager per call     Create one at the top of your script and reuse it
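
The unbounded-list pitfall above is easy to demonstrate without any instrument: an ordinary list grows without limit, while deque(maxlen=N) caps itself by evicting the oldest entries.

```python
from collections import deque

unbounded = []               # grows forever -- the leak
bounded = deque(maxlen=100)  # capped -- old entries roll off

for i in range(10_000):
    unbounded.append(i)
    bounded.append(i)

print(len(unbounded))  # 10000
print(len(bounded))    # 100
print(bounded[0])      # 9900 -- only the newest 100 survive
```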