Added disks extended attributes

This patch brings some of the physical and virtual drive attributes as
`custom_fields` to the disks inventory.

The goal is to have this information at hand to ease disk maintenance
when a drive becomes unavailable and its attributes can't be read
anymore from the RAID controller.

It also helps to standardize the extended disk attributes across the
different manufacturers.

As the disk physical identifiers were not available in the correct
format (hexadecimal format using the `xml` output, as opposed to the
`X:Y:Z` format of the default `list` output), the command line parser
has been refactored to read the `list` format, rather than the `xml`
one, in the `omreport` RAID controller parser.

As the custom fields have to be created before the disks extended
attributes can be registered, this feature is only activated using the
`--process-virtual-drives` command line parameter, or by setting
`process_virtual_drives` to `true` in the configuration file.

The custom fields to create as `DCIM > inventory item` `Text` are described
below.

    NAME            LABEL                      DESCRIPTION
    mount_point     Mount point                Device mount point(s)
    pd_identifier   Physical disk identifier   Physical disk identifier in the RAID controller
    vd_array        Virtual drive array        Virtual drive array the disk is member of
    vd_consistency  Virtual drive consistency  Virtual disk array consistency
    vd_device       Virtual drive device       Virtual drive system device
    vd_raid_type    Virtual drive RAID         Virtual drive array RAID type
    vd_size         Virtual drive size         Virtual drive array size

In the current implementation, the disks attributes are not updated: if
a disk with the correct serial number is found, it is considered up to
date.

To force the reprocessing of the disks extended attributes, the
`--force-disk-refresh` command line option can be used: it removes all
existing disks before populating them again with the freshly parsed
attributes. Unless this option is specified, the extended attributes
won't be modified until a disk is replaced.

It is possible to dump the physical/virtual disks map to the filesystem
in JSON format to ease or automate disks management. The file path has
to be provided using the `--dump-disks-map` command line parameter.
Christophe Simon 2022-02-25 18:43:09 +01:00
parent af9df9ab4b
commit e789619b34
12 changed files with 505 additions and 223 deletions

@ -92,10 +92,12 @@ netbox:
token: supersecrettoken
# uncomment to disable ssl verification
# ssl_verify: false
# uncomment to use the system's CA certificates
# ssl_ca_certs_file: /etc/ssl/certs/ca-certificates.crt
# Network configuration
network:
# Regex to ignore interfaces
ignore_interfaces: "(dummy.*|docker.*)"
# Regex to ignore IP addresses
ignore_ips: (127\.0\.0\..*)
@ -119,7 +121,7 @@ network:
# driver: "file:/tmp/tenant"
# regex: "(.*)"
## Enable virtual machine support
# virtual:
# # not mandatory, can be guessed
# enabled: True
@ -145,7 +147,7 @@ rack_location:
# driver: "file:/tmp/datacenter"
# regex: "(.*)"
# Enable local inventory reporting
inventory: true
```
@ -160,6 +162,36 @@ The `get_blade_slot` method returns the name of the `Device Bay`.
Certain vendors don't report the blade slot in `dmidecode`, so we can use the `slot_location` regex feature of the configuration file.
Some blade servers can be equipped with additional hardware using expansion blades, next to the processing blade, such as GPU expansion, or drives bay expansion. By default, the hardware from the expansion is associated with the blade server itself, but it's possible to register the expansion as its own device using the `--expansion-as-device` command line parameter, or by setting `expansion_as_device` to `true` in the configuration file.
## Drives attributes processing
It is possible to process drives extended attributes such as the drive's physical or logical identifier, logical drive RAID type, size, consistency and so on.
Those attributes are set as `custom_fields` in Netbox, and need to be created before they can be populated during the inventory phase.
As the custom fields have to be created before the disks extended attributes can be registered, this feature is only activated using the `--process-virtual-drives` command line parameter, or by setting `process_virtual_drives` to `true` in the configuration file.
The custom fields to create as `DCIM > inventory item` `Text` are described below.
```
NAME            LABEL                      DESCRIPTION
mount_point     Mount point                Device mount point(s)
pd_identifier   Physical disk identifier   Physical disk identifier in the RAID controller
vd_array        Virtual drive array        Virtual drive array the disk is member of
vd_consistency  Virtual drive consistency  Virtual disk array consistency
vd_device       Virtual drive device       Virtual drive system device
vd_raid_type    Virtual drive RAID         Virtual drive array RAID type
vd_size         Virtual drive size         Virtual drive array size
```
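The snippet below is a minimal sketch of creating those fields through the REST API with `pynetbox`; it assumes a NetBox version recent enough (>= 2.10) to expose custom fields over the API, and the URL and token are placeholders.
```
import pynetbox

# Placeholder URL and token: point these at your NetBox instance
nb = pynetbox.api("https://netbox.example.org", token="supersecrettoken")

# (name, label, description) triplets from the table above
FIELDS = [
    ("mount_point", "Mount point", "Device mount point(s)"),
    ("pd_identifier", "Physical disk identifier",
     "Physical disk identifier in the RAID controller"),
    ("vd_array", "Virtual drive array", "Virtual drive array the disk is member of"),
    ("vd_consistency", "Virtual drive consistency", "Virtual disk array consistency"),
    ("vd_device", "Virtual drive device", "Virtual drive system device"),
    ("vd_raid_type", "Virtual drive RAID", "Virtual drive array RAID type"),
    ("vd_size", "Virtual drive size", "Virtual drive array size"),
]

for name, label, description in FIELDS:
    nb.extras.custom_fields.create(
        name=name,
        label=label,
        description=description,
        type="text",
        content_types=["dcim.inventoryitem"],
    )
```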
In the current implementation, the disks attributes are not updated: if a disk with the correct serial number is found, it is considered up to date.
To force the reprocessing of the disks extended attributes, the `--force-disk-refresh` command line option can be used: it removes all existing disks before populating them again with the freshly parsed attributes. Unless this option is specified, the extended attributes won't be modified until a disk is replaced.
It is possible to dump the physical/virtual disks map to the filesystem in JSON format to ease or automate disks management. The file path has to be provided using the `--dump-disks-map` command line parameter.
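As an illustration, here is a minimal sketch of consuming such a dump; the file path and the printed fields are hypothetical:
```
import json

# Load a map previously dumped via the --dump-disks-map option
# (the path below is only an example)
with open("/tmp/disks_map.json") as f:
    disks_map = json.load(f)

for disk in disks_map:
    print("{} -> {} ({})".format(
        disk.get("pd_identifier"),
        disk.get("vd_device"),
        disk.get("mount_point"),
    ))
```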
## Anycast IP
The default behavior of the agent is to assign an interface to an IP.
@ -256,5 +288,5 @@ On a personal note, I use the docker image from [netbox-community/netbox-docker]
# git clone https://github.com/netbox-community/netbox-docker
# cd netbox-docker
# docker-compose pull
# docker-compose up
```

@ -78,6 +78,13 @@ def get_config():
p.add_argument('--network.lldp', help='Enable auto-cabling feature through LLDP infos')
p.add_argument('--inventory', action='store_true',
help='Enable HW inventory (CPU, Memory, RAID Cards, Disks) feature')
p.add_argument('--process-virtual-drives', action='store_true',
help='Process virtual drives information from RAID '
'controllers to fill disk custom_fields')
p.add_argument('--force-disk-refresh', action='store_true',
help='Forces disks detection reprocessing')
p.add_argument('--dump-disks-map',
help='File path to dump physical/virtual disks map')
options = p.parse_args()
return options

@ -1,8 +1,3 @@
import logging
import re
import pynetbox
from netbox_agent.config import config
from netbox_agent.config import netbox_instance as nb
from netbox_agent.lshw import LSHW
@ -10,6 +5,12 @@ from netbox_agent.misc import get_vendor, is_tool
from netbox_agent.raid.hp import HPRaid
from netbox_agent.raid.omreport import OmreportRaid
from netbox_agent.raid.storcli import StorcliRaid
import traceback
import pynetbox
import logging
import json
import sys
import re
INVENTORY_TAG = {
'cpu': {'name': 'hw:cpu', 'slug': 'hw-cpu'},
@ -226,7 +227,7 @@ class Inventory():
def get_raid_cards(self, filter_cards=False):
raid_class = None
if self.server.manufacturer == 'Dell':
if self.server.manufacturer in ('Dell', 'Huawei'):
if is_tool('omreport'):
raid_class = OmreportRaid
if is_tool('storcli'):
@ -302,53 +303,60 @@ class Inventory():
if raid_card.get_serial_number() not in [x.serial for x in nb_raid_cards]:
self.create_netbox_raid_card(raid_card)
def is_virtual_disk(self, disk):
def is_virtual_disk(self, disk, raid_devices):
disk_type = disk.get('type')
logicalname = disk.get('logicalname')
description = disk.get('description')
size = disk.get('size')
product = disk.get('product')
if logicalname in raid_devices or disk_type is None:
return True
non_raid_disks = [
'MR9361-8i',
]
if size is None and logicalname is None or \
'virtual' in product.lower() or 'logical' in product.lower() or \
if logicalname in raid_devices or \
disk_type is None or \
product in non_raid_disks or \
'virtual' in product.lower() or \
'logical' in product.lower() or \
'volume' in description.lower() or \
description == 'SCSI Enclosure' or \
'volume' in description.lower():
(size is None and logicalname is None):
return True
return False
def get_hw_disks(self):
disks = []
for raid_card in self.get_raid_cards(filter_cards=True):
disks.extend(raid_card.get_physical_disks())
raid_devices = [
d.get('custom_fields', {}).get('vd_device')
for d in disks
if d.get('custom_fields', {}).get('vd_device')
]
for disk in self.lshw.get_hw_linux("storage"):
if self.is_virtual_disk(disk):
if self.is_virtual_disk(disk, raid_devices):
continue
logicalname = disk.get('logicalname')
description = disk.get('description')
size = disk.get('size', 0)
product = disk.get('product')
serial = disk.get('serial')
d = {}
d["name"] = ""
d['Size'] = '{} GB'.format(int(size / 1024 / 1024 / 1024))
d['logicalname'] = logicalname
d['description'] = description
d['SN'] = serial
d['Model'] = product
size = int(disk.get('size', 0)) / 1073741824
d = {
"name": "",
'Size': '{} GB'.format(size),
'logicalname': disk.get('logicalname'),
'description': disk.get('description'),
'SN': disk.get('serial'),
'Model': disk.get('product'),
'Type': disk.get('type'),
}
if disk.get('vendor'):
d['Vendor'] = disk['vendor']
else:
d['Vendor'] = get_vendor(disk['product'])
disks.append(d)
for raid_card in self.get_raid_cards(filter_cards=True):
disks += raid_card.get_physical_disks()
# remove duplicate serials
seen = set()
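# set.add() returns None, so the second clause records each serial
# while keeping only the first disk seen with it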
uniq = [x for x in disks if x['SN'] not in seen and not seen.add(x['SN'])]
@ -361,53 +369,79 @@ class Inventory():
logicalname = disk.get('logicalname')
desc = disk.get('description')
# nonraid disk
if logicalname and desc:
if type(logicalname) is list:
logicalname = logicalname[0]
name = '{} - {} ({})'.format(
desc,
logicalname,
disk.get('Size', 0))
description = 'Device {}'.format(disk.get('logicalname', 'Unknown'))
else:
name = '{} ({})'.format(disk['Model'], disk['Size'])
description = '{}'.format(disk['Type'])
name = '{} ({})'.format(disk['Model'], disk['Size'])
description = disk['Type']
_ = nb.dcim.inventory_items.create(
device=self.device_id,
discovered=True,
tags=[{'name': INVENTORY_TAG['disk']['name']}],
name=name,
serial=disk['SN'],
part_id=disk['Model'],
description=description,
manufacturer=manufacturer.id if manufacturer else None
)
parms = {
'device': self.device_id,
'discovered': True,
'tags': [{'name': INVENTORY_TAG['disk']['name']}],
'name': name,
'serial': disk['SN'],
'part_id': disk['Model'],
'description': description,
'manufacturer': getattr(manufacturer, "id", None),
}
if config.process_virtual_drives:
parms['custom_fields'] = disk.get("custom_fields", {})
_ = nb.dcim.inventory_items.create(**parms)
logging.info('Creating Disk {model} {serial}'.format(
model=disk['Model'],
serial=disk['SN'],
))
def dump_disks_map(self, disks):
disk_map = [d['custom_fields'] for d in disks if 'custom_fields' in d]
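# a destination of "-" writes the map to stdout instead of a file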
if config.dump_disks_map == "-":
f = sys.stdout
else:
f = open(config.dump_disks_map, "w")
f.write(
json.dumps(
disk_map,
separators=(',', ':'),
indent=4,
sort_keys=True
)
)
if config.dump_disks_map != "-":
f.close()
def do_netbox_disks(self):
nb_disks = self.get_netbox_inventory(
device_id=self.device_id,
tag=INVENTORY_TAG['disk']['slug'])
tag=INVENTORY_TAG['disk']['slug']
)
disks = self.get_hw_disks()
if config.dump_disks_map:
try:
self.dump_disks_map(disks)
except Exception as e:
logging.error("Failed to dump disks map: {}".format(e))
logging.debug(traceback.format_exc())
disk_serials = [d['SN'] for d in disks if 'SN' in d]
# delete disks that are in netbox but not locally
# use the serial_number as the comparison element
for nb_disk in nb_disks:
if nb_disk.serial not in [x['SN'] for x in disks if x.get('SN')]:
if nb_disk.serial not in disk_serials or \
config.force_disk_refresh:
logging.info('Deleting unknown locally Disk {serial}'.format(
serial=nb_disk.serial,
))
nb_disk.delete()
if config.force_disk_refresh:
nb_disks = self.get_netbox_inventory(
device_id=self.device_id,
tag=INVENTORY_TAG['disk']['slug']
)
# create disks that are not in netbox
for disk in disks:
if disk.get('SN') not in [x.serial for x in nb_disks]:
if disk.get('SN') not in [d.serial for d in nb_disks]:
self.create_netbox_disk(disk)
def create_netbox_memory(self, memory):

@ -1,9 +1,8 @@
import json
import logging
import subprocess
import sys
from netbox_agent.misc import is_tool
import subprocess
import logging
import json
import sys
class LSHW():
@ -15,7 +14,13 @@ class LSHW():
data = subprocess.getoutput(
'lshw -quiet -json'
)
self.hw_info = json.loads(data)
json_data = json.loads(data)
# Starting from version 02.18, `lshw -json` wraps its result in a list
# rather than returning a dictionary directly
if isinstance(json_data, list):
self.hw_info = json_data[0]
else:
self.hw_info = json_data
self.info = {}
self.memories = []
self.interfaces = []
@ -77,42 +82,41 @@ class LSHW():
def find_storage(self, obj):
if "children" in obj:
for device in obj["children"]:
d = {}
d["logicalname"] = device.get("logicalname")
d["product"] = device.get("product")
d["serial"] = device.get("serial")
d["version"] = device.get("version")
d["size"] = device.get("size")
d["description"] = device.get("description")
self.disks.append(d)
self.disks.append({
"logicalname": device.get("logicalname"),
"product": device.get("product"),
"serial": device.get("serial"),
"version": device.get("version"),
"size": device.get("size"),
"description": device.get("description"),
"type": device.get("description"),
})
elif "nvme" in obj["configuration"]["driver"]:
if not is_tool('nvme'):
logging.error('nvme-cli >= 1.0 does not seem to be installed')
else:
try:
nvme = json.loads(
subprocess.check_output(
["nvme", '-list', '-o', 'json'],
encoding='utf8')
)
for device in nvme["Devices"]:
d = {}
d['logicalname'] = device["DevicePath"]
d['product'] = device["ModelNumber"]
d['serial'] = device["SerialNumber"]
d["version"] = device["Firmware"]
if "UsedSize" in device:
d['size'] = device["UsedSize"]
if "UsedBytes" in device:
d['size'] = device["UsedBytes"]
d['description'] = "NVME Disk"
self.disks.append(d)
except Exception:
pass
return
try:
nvme = json.loads(
subprocess.check_output(
["nvme", '-list', '-o', 'json'],
encoding='utf8')
)
for device in nvme["Devices"]:
d = {
'logicalname': device["DevicePath"],
'product': device["ModelNumber"],
'serial': device["SerialNumber"],
"version": device["Firmware"],
'description': "NVME",
'type': "NVME",
}
if "UsedSize" in device:
d['size'] = device["UsedSize"]
if "UsedBytes" in device:
d['size'] = device["UsedBytes"]
self.disks.append(d)
except Exception:
pass
def find_cpus(self, obj):
if "product" in obj:

@ -1,10 +1,9 @@
import socket
import subprocess
from shutil import which
from slugify import slugify
from netbox_agent.config import netbox_instance as nb
from slugify import slugify
from shutil import which
import subprocess
import socket
import re
def is_tool(name):
@ -74,3 +73,19 @@ def create_netbox_tags(tags):
)
ret.append(nb_tag)
return ret
def get_mount_points():
mount_points = {}
output = subprocess.getoutput('mount')
for r in output.split("\n"):
if not r.startswith("/dev/"):
continue
mount_info = r.split()
device = mount_info[0]
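# strip the trailing partition number so e.g. /dev/sda1 maps back to /dev/sda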
device = re.sub(r'\d+$', '', device)
mp = mount_info[2]
mount_points.setdefault(device, []).append(mp)
return mount_points

@ -325,7 +325,7 @@ class Network(object):
netbox_ips = nb.ipam.ip_addresses.filter(
address=ip,
)
if not len(netbox_ips):
if not netbox_ips:
logging.info('Create new IP {ip} on {interface}'.format(
ip=ip, interface=interface))
query_params = {
@ -340,7 +340,7 @@ class Network(object):
)
return netbox_ip
netbox_ip = next(netbox_ips)
netbox_ip = list(netbox_ips)[0]
# If IP exists in anycast
if netbox_ip.role and netbox_ip.role.label == 'Anycast':
logging.debug('IP {} is Anycast..'.format(ip))

@ -115,7 +115,7 @@ class PowerSupply():
voltage = [p['voltage'] for p in pwr_feeds]
else:
logging.info('Could not find power feeds for Rack, defaulting value to 230')
voltage = [230 for _ in nb_psu]
voltage = [230 for _ in nb_psus]
for i, nb_psu in enumerate(nb_psus):
nb_psu.allocated_draw = int(float(psu_cons[i]) * voltage[i])

@ -1,13 +1,20 @@
import re
import subprocess
from netbox_agent.config import config
from netbox_agent.misc import get_vendor
from netbox_agent.raid.base import Raid, RaidController
from netbox_agent.misc import get_vendor
from netbox_agent.config import config
import subprocess
import logging
import re
REGEXP_CONTROLLER_HP = re.compile(r'Smart Array ([a-zA-Z0-9- ]+) in Slot ([0-9]+)')
def ssacli(command):
output = subprocess.getoutput('ssacli {}'.format(command))
lines = output.split('\n')
lines = list(filter(None, lines))
return lines
def _parse_ctrl_output(lines):
controllers = {}
current_ctrl = None
@ -18,11 +25,11 @@ def _parse_ctrl_output(lines):
ctrl = REGEXP_CONTROLLER_HP.search(line)
if ctrl is not None:
current_ctrl = ctrl.group(1)
controllers[current_ctrl] = {"Slot": ctrl.group(2)}
if "Embedded" not in line:
controllers[current_ctrl]["External"] = True
controllers[current_ctrl] = {'Slot': ctrl.group(2)}
if 'Embedded' not in line:
controllers[current_ctrl]['External'] = True
continue
attr, val = line.split(": ", 1)
attr, val = line.split(': ', 1)
attr = attr.strip()
val = val.strip()
controllers[current_ctrl][attr] = val
@ -39,27 +46,54 @@ def _parse_pd_output(lines):
if not line or line.startswith('Note:'):
continue
# Parses the Array the drives are in
if line.startswith("Array"):
if line.startswith('Array'):
current_array = line.split(None, 1)[1]
# Detects new physical drive
if line.startswith("physicaldrive"):
if line.startswith('physicaldrive'):
current_drv = line.split(None, 1)[1]
drives[current_drv] = {}
if current_array is not None:
drives[current_drv]["Array"] = current_array
drives[current_drv]['Array'] = current_array
continue
if ": " not in line:
if ': ' not in line:
continue
attr, val = line.split(": ", 1)
attr, val = line.split(': ', 1)
drives.setdefault(current_drv, {})[attr] = val
return drives
def _parse_ld_output(lines):
drives = {}
current_array = None
current_drv = None
for line in lines:
line = line.strip()
if not line or line.startswith('Note:'):
continue
# Parses the Array the drives are in
if line.startswith('Array'):
current_array = line.split(None, 1)[1]
drives[current_array] = {}
# Detects new logical drive
if line.startswith('Logical Drive'):
current_drv = line.split(': ', 1)[1]
drives.setdefault(current_array, {})['LogicalDrive'] = current_drv
continue
if ': ' not in line:
continue
attr, val = line.split(': ', 1)
drives.setdefault(current_array, {})[attr] = val
return drives
class HPRaidController(RaidController):
def __init__(self, controller_name, data):
self.controller_name = controller_name
self.data = data
self.drives = self._get_physical_disks()
self.pdrives = self._get_physical_disks()
self.ldrives = self._get_logical_drives()
self._get_virtual_drives_map()
def get_product_name(self):
return self.controller_name
@ -77,15 +111,12 @@ class HPRaidController(RaidController):
return self.data.get('External', False)
def _get_physical_disks(self):
output = subprocess.getoutput(
'ssacli ctrl slot={slot} pd all show detail'.format(slot=self.data['Slot'])
)
lines = output.split('\n')
lines = list(filter(None, lines))
drives = _parse_pd_output(lines)
ret = []
lines = ssacli('ctrl slot={} pd all show detail'.format(self.data['Slot']))
pdrives = _parse_pd_output(lines)
ret = {}
for name, attrs in drives.items():
for name, attrs in pdrives.items():
array = attrs.get('Array', '')
model = attrs.get('Model', '').strip()
vendor = None
if model.startswith('HP'):
@ -95,7 +126,8 @@ class HPRaidController(RaidController):
else:
vendor = get_vendor(model)
ret.append({
ret[name] = {
'Array': array,
'Model': model,
'Vendor': vendor,
'SN': attrs.get('Serial Number', '').strip(),
@ -103,11 +135,40 @@ class HPRaidController(RaidController):
'Type': 'SSD' if attrs.get('Interface Type') == 'Solid State SATA'
else 'HDD',
'_src': self.__class__.__name__,
})
}
return ret
def _get_logical_drives(self):
lines = ssacli('ctrl slot={} ld all show detail'.format(self.data['Slot']))
ldrives = _parse_ld_output(lines)
ret = {}
for array, attrs in ldrives.items():
ret[array] = {
'vd_array': array,
'vd_size': attrs['Size'],
'vd_consistency': attrs['Status'],
'vd_raid_type': 'RAID {}'.format(attrs['Fault Tolerance']),
'vd_device': attrs['LogicalDrive'],
'mount_point': attrs['Mount Points']
}
return ret
def _get_virtual_drives_map(self):
for name, attrs in self.pdrives.items():
array = attrs["Array"]
ld = self.ldrives.get(array)
if ld is None:
logging.error(
"Failed to find array information for physical drive {}."
" Ignoring.".format(name)
)
continue
attrs['custom_fields'] = ld
attrs['custom_fields']['pd_identifier'] = name
def get_physical_disks(self):
return self.drives
return list(self.pdrives.values())
class HPRaid(Raid):

@ -1,25 +1,32 @@
import re
import subprocess
import xml.etree.ElementTree as ET # NOQA
from netbox_agent.misc import get_vendor
from netbox_agent.raid.base import Raid, RaidController
# Inspiration from https://github.com/asciiphil/perc-status/blob/master/perc-status
from netbox_agent.misc import get_vendor, get_mount_points
from netbox_agent.config import config
import subprocess
import logging
import re
def get_field(obj, fieldname):
f = obj.find(fieldname)
if f is None:
return None
if f.attrib['type'] in ['u32', 'u64']:
if re.search('Mask$', fieldname):
return int(f.text, 2)
else:
return int(f.text)
if f.attrib['type'] == 'astring':
return f.text
return f.text
def omreport(sub_command):
command = 'omreport {}'.format(sub_command)
output = subprocess.getoutput(command)
res = {}
section_re = re.compile('^[A-Z]')
current_section = None
current_obj = None
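# The `list` output is line-oriented: a capitalized line opens a new
# section, and each 'ID : <n>' attribute starts a new object within it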
for line in output.split('\n'):
if ': ' in line:
attr, value = line.split(': ', 1)
attr = attr.strip()
value = value.strip()
if attr == 'ID':
obj = {}
res.setdefault(current_section, []).append(obj)
current_obj = obj
current_obj[attr] = value
elif section_re.search(line) is not None:
current_section = line.strip()
return res
class OmreportController(RaidController):
@ -28,49 +35,88 @@ class OmreportController(RaidController):
self.controller_index = controller_index
def get_product_name(self):
return get_field(self.data, 'Name')
return self.data['Name']
def get_manufacturer(self):
return None
def get_serial_number(self):
return get_field(self.data, 'DeviceSerialNumber')
return self.data.get('DeviceSerialNumber')
def get_firmware_version(self):
return get_field(self.data, 'Firmware Version')
return self.data.get('Firmware Version')
def _get_physical_disks(self):
pds = {}
res = omreport('storage pdisk controller={}'.format(
self.controller_index
))
for pdisk in [d for d in list(res.values())[0]]:
disk_id = pdisk['ID']
size = re.sub('B .*$', 'B', pdisk['Capacity'])
pds[disk_id] = {
'Vendor': get_vendor(pdisk['Vendor ID']),
'Model': pdisk['Product ID'],
'SN': pdisk['Serial No.'],
'Size': size,
'Type': pdisk['Media'],
'_src': self.__class__.__name__,
}
return pds
def _get_virtual_drives_map(self):
pds = {}
res = omreport('storage vdisk controller={}'.format(
self.controller_index
))
for vdisk in [d for d in list(res.values())[0]]:
vdisk_id = vdisk['ID']
device = vdisk['Device Name']
mount_points = get_mount_points()
mp = mount_points.get(device, ['n/a'])
size = re.sub('B .*$', 'B', vdisk['Size'])
vd = {
'vd_array': vdisk_id,
'vd_size': size,
'vd_consistency': vdisk['State'],
'vd_raid_type': vdisk['Layout'],
'vd_device': vdisk['Device Name'],
'mount_point': ', '.join(sorted(mp)),
}
drives_res = omreport(
'storage pdisk controller={} vdisk={}'.format(
self.controller_index, vdisk_id
))
for pdisk in [d for d in list(drives_res.values())[0]]:
pds[pdisk['ID']] = vd
return pds
def get_physical_disks(self):
ret = []
output = subprocess.getoutput(
'omreport storage controller controller={} -fmt xml'.format(self.controller_index)
)
root = ET.fromstring(output)
et_array_disks = root.find('ArrayDisks')
if et_array_disks is not None:
for obj in et_array_disks.findall('DCStorageObject'):
ret.append({
'Vendor': get_vendor(get_field(obj, 'Vendor')),
'Model': get_field(obj, 'ProductID'),
'SN': get_field(obj, 'DeviceSerialNumber'),
'Size': '{:.0f}GB'.format(
int(get_field(obj, 'Length')) / 1024 / 1024 / 1024
),
'Type': 'HDD' if int(get_field(obj, 'MediaType')) == 1 else 'SSD',
'_src': self.__class__.__name__,
})
return ret
pds = self._get_physical_disks()
vds = self._get_virtual_drives_map()
for pd_identifier, vd in vds.items():
if pd_identifier not in pds:
logging.error(
'Physical drive {} listed in virtual drive {} not '
'found in drives list'.format(
pd_identifier, vd['vd_array']
)
)
continue
pds[pd_identifier].setdefault('custom_fields', {}).update(vd)
pds[pd_identifier]['custom_fields']['pd_identifier'] = pd_identifier
return list(pds.values())
class OmreportRaid(Raid):
def __init__(self):
output = subprocess.getoutput('omreport storage controller -fmt xml')
controller_xml = ET.fromstring(output)
self.controllers = []
res = omreport('storage controller')
for obj in controller_xml.find('Controllers').findall('DCStorageObject'):
ctrl_index = get_field(obj, 'ControllerNum')
for controller in res['Controller']:
ctrl_index = controller['ID']
self.controllers.append(
OmreportController(ctrl_index, obj)
OmreportController(ctrl_index, controller)
)
def get_controllers(self):

@ -1,8 +1,31 @@
import json
import subprocess
from netbox_agent.misc import get_vendor
from netbox_agent.raid.base import Raid, RaidController
from netbox_agent.misc import get_vendor, get_mount_points
from netbox_agent.config import config
import subprocess
import logging
import json
import re
import os
def storecli(sub_command):
command = 'storcli {} J'.format(sub_command)
output = subprocess.getoutput(command)
data = json.loads(output)
controllers = dict([
(
c['Command Status']['Controller'],
c['Response Data']
) for c in data['Controllers']
if c['Command Status']['Status'] == 'Success'
])
if not controllers:
logging.error(
"Failed to execute command '{}'. "
"Ignoring data.".format(command)
)
return {}
return controllers
class StorcliController(RaidController):
@ -22,52 +45,101 @@ class StorcliController(RaidController):
def get_firmware_version(self):
return self.data['FW Package Build']
def get_physical_disks(self):
ret = []
output = subprocess.getoutput(
'storcli /c{}/eall/sall show all J'.format(self.controller_index)
)
drive_infos = json.loads(output)['Controllers'][self.controller_index]['Response Data']
def _get_physical_disks(self):
pds = {}
cmd = '/c{}/eall/sall show all'.format(self.controller_index)
controllers = storecli(cmd)
pd_info = controllers[self.controller_index]
pd_re = re.compile(r'^Drive (/c\d+/e\d+/s\d+)$')
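# physical drive sections in the JSON output are keyed like 'Drive /cX/eY/sZ'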
for physical_drive in self.data['PD LIST']:
enclosure = physical_drive.get('EID:Slt').split(':')[0]
slot = physical_drive.get('EID:Slt').split(':')[1]
size = physical_drive.get('Size').strip()
media_type = physical_drive.get('Med').strip()
drive_identifier = 'Drive /c{}/e{}/s{}'.format(
str(self.controller_index), str(enclosure), str(slot)
)
drive_attr = drive_infos['{} - Detailed Information'.format(drive_identifier)][
'{} Device attributes'.format(drive_identifier)]
model = drive_attr.get('Model Number', '').strip()
ret.append({
for section, attrs in pd_info.items():
reg = pd_re.search(section)
if reg is None:
continue
pd_name = reg.group(1)
pd_attr = attrs[0]
pd_identifier = pd_attr['EID:Slt']
size = pd_attr.get('Size', '').strip()
media_type = pd_attr.get('Med', '').strip()
pd_details = pd_info['{} - Detailed Information'.format(section)]
pd_dev_attr = pd_details['{} Device attributes'.format(section)]
model = pd_dev_attr.get('Model Number', '').strip()
pd = {
'Model': model,
'Vendor': get_vendor(model),
'SN': drive_attr.get('SN', '').strip(),
'SN': pd_dev_attr.get('SN', '').strip(),
'Size': size,
'Type': media_type,
'_src': self.__class__.__name__,
})
return ret
}
if config.process_virtual_drives:
pd.setdefault('custom_fields', {})['pd_identifier'] = pd_name
pds[pd_identifier] = pd
return pds
def _get_virtual_drives_map(self):
vds = {}
cmd = '/c{}/vall show all'.format(self.controller_index)
controllers = storecli(cmd)
vd_info = controllers[self.controller_index]
mount_points = get_mount_points()
for vd_identifier, vd_attrs in vd_info.items():
if not vd_identifier.startswith("/c{}/v".format(self.controller_index)):
continue
volume = vd_identifier.split("/")[-1].lstrip("v")
vd_attr = vd_attrs[0]
vd_pd_identifier = 'PDs for VD {}'.format(volume)
vd_pds = vd_info[vd_pd_identifier]
vd_prop_identifier = 'VD{} Properties'.format(volume)
vd_properties = vd_info[vd_prop_identifier]
for pd in vd_pds:
pd_identifier = pd["EID:Slt"]
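# resolve the VD's SCSI NAA identifier to its block device via /dev/disk/by-id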
wwn = vd_properties["SCSI NAA Id"]
wwn_path = "/dev/disk/by-id/wwn-0x{}".format(wwn)
device = os.path.realpath(wwn_path)
mp = mount_points.get(device, ["n/a"])
vds[pd_identifier] = {
"vd_array": vd_identifier,
"vd_size": vd_attr["Size"],
"vd_consistency": vd_attr["Consist"],
"vd_raid_type": vd_attr["TYPE"],
"vd_device": device,
"mount_point": ", ".join(sorted(mp))
}
return vds
def get_physical_disks(self):
# Parses physical disks information
pds = self._get_physical_disks()
# Parses virtual drives information and maps them to physical disks
vds = self._get_virtual_drives_map()
for pd_identifier, vd in vds.items():
if pd_identifier not in pds:
logging.error(
"Physical drive {} listed in virtual drive {} not "
"found in drives list".format(
pd_identifier, vd["vd_array"]
)
)
continue
pds[pd_identifier].setdefault("custom_fields", {}).update(vd)
return list(pds.values())
class StorcliRaid(Raid):
def __init__(self):
self.output = subprocess.getoutput('storcli /call show J')
self.data = json.loads(self.output)
self.controllers = []
if len([
x for x in self.data['Controllers']
if x['Command Status']['Status'] == 'Success'
]) > 0:
for controller in self.data['Controllers']:
self.controllers.append(
StorcliController(
controller['Command Status']['Controller'],
controller['Response Data']
)
controllers = storecli('/call show')
for controller_id, controller_data in controllers.items():
self.controllers.append(
StorcliController(
controller_id,
controller_data
)
)
def get_controllers(self):
return self.controllers

@ -1,9 +1,3 @@
import logging
import socket
import subprocess
import sys
from pprint import pprint
import netbox_agent.dmidecode as dmidecode
from netbox_agent.config import config
from netbox_agent.config import netbox_instance as nb
@ -12,6 +6,11 @@ from netbox_agent.location import Datacenter, Rack, Tenant
from netbox_agent.misc import create_netbox_tags, get_device_role, get_device_type
from netbox_agent.network import ServerNetwork
from netbox_agent.power import PowerSupply
from pprint import pprint
import subprocess
import logging
import socket
import sys
class ServerBase():
@ -493,3 +492,15 @@ class ServerBase():
Indicates if the device hosts an expansion card
"""
return False
def own_gpu_expansion_slot(self):
"""
Indicates if the device hosts a GPU expansion card
"""
return False
def own_drive_expansion_slot(self):
"""
Indicates if the device hosts a drive expansion bay
"""
return False

@ -8,7 +8,7 @@ class GenericHost(ServerBase):
self.manufacturer = dmidecode.get_by_type(self.dmi, 'Baseboard')[0].get('Manufacturer')
def is_blade(self):
return None
return False
def get_blade_slot(self):
return None