This document describes Linux Diagnostics Extension (LAD) version 3.0 and later.
Important
For information about version 2.3 and earlier, see Monitor the performance and diagnostic data of a Linux virtual machine.
Introduction
The Linux diagnostic extension helps a user monitor the health of a Linux virtual machine running on Microsoft Azure. It has the following capabilities:
- Collects virtual machine system performance metrics and stores them in a specific table in a specific storage account.
- Retrieves syslog log events and stores them in a specific table in the designated storage account.
- Allows users to customize the data metrics that are collected and uploaded.
- Allows users to customize syslog facilities and severity levels of events collected and uploaded.
- Allows users to upload specific log files to a specific storage table.
- Supports sending metrics and log events to arbitrary Azure Event Hubs endpoints and JSON-formatted blobs in the designated storage account.
This extension works with both Azure deployment models.
Install the extension in a virtual machine
You can enable the extension by using Azure PowerShell cmdlets, Azure CLI scripts, Azure Resource Manager templates (ARM templates), or the Azure portal. For more information, see the installation and configuration sections in this article.
Note
Some components of the LAD VM extension are also shipped in the Log Analytics VM extension. Because of this architecture, conflicts can arise if both extensions are instantiated in the same ARM template.
To avoid install-time conflicts, use the dependsOn directive to ensure the extensions are installed sequentially. The extensions can be installed in either order.
These installation instructions and a downloadable sample configuration configure LAD 3.0 to:
- Collect and store the same measurements as in LAD 2.3.
- Collect a useful set of file system metrics. This feature is new in LAD 3.0.
- Collect the default syslog collection that LAD 2.3 enabled.
- Enable the Azure portal experience for charting and alerting on virtual machine metrics.
The downloadable configuration is just an example. Modify it to suit your needs.
Supported Linux distributions
LAD supports the following distributions and versions. The list of distributions and versions applies only to Azure-endorsed Linux vendor images. The extension generally doesn't support third-party BYOL and BYOS images, such as appliances.
A distribution that lists only major versions, such as Debian 7, is also supported for all minor versions. If a specific minor version is specified, only that version is supported. If a plus sign (+) is appended, minor versions equal to or later than the specified version are supported.
Supported distributions and versions:
- Ubuntu 20.04, 18.04, 16.04, 14.04
- CentOS 7, 6.5+
- OracleLinux 7, 6.4+
- OpenSUSE 13.1+
- SUSE Linux Enterprise Server 12
- Debian 9, 8, 7
- Red Hat Enterprise Linux (RHEL) 7, 6.7+
Prerequisites
- Azure Linux Agent version 2.2.0 or later. Most Azure VM Linux gallery images include version 2.2.7 or later. Run
/usr/sbin/waagent -version
to confirm the version installed on the virtual machine. If the virtual machine is running an older version, update the guest agent.
- Azure CLI. If needed, set up the Azure CLI environment on your machine.
- The wget command. If you don't already have it, run
sudo apt-get install wget
.
- An existing Azure subscription.
- An existing general-purpose storage account to store the data. General-purpose storage accounts must support table storage. A blob storage account won't work.
- Python 2.
Python requirement
The Linux diagnostic extension requires Python 2. If your virtual machine uses a distribution that doesn't include Python 2 by default, you must install it. The following sample commands install Python 2 on various distributions:
- Red Hat, CentOS, Oracle:
yum install -y python2
- Ubuntu, Debian:
apt-get install -y python2
- SUSE:
zypper install -y python2
The python2
executable must be aliased to python. Here is one method to set this alias:
Run the following command to remove all existing aliases.
sudo update-alternatives --remove-all python
Run the following command to create the alias.
sudo update-alternatives --install /usr/bin/python python /usr/bin/python2 1
Installation example
The sample configuration downloaded in the following examples collects a standard set of data and sends it to table storage. The URL for the sample configuration and its contents are subject to change.
In most cases, you should download a copy of the portal settings JSON file and customize it for your needs. Then use templates or your own automation to use a customized version of the configuration file rather than downloading from the URL each time.
Note
In the following example, fill in the correct values for the variables in the first section before running the code.
Azure CLI sample
# Set your Azure VM diagnostic variables.
my_resource_group=<your_azure_resource_group_name_containing_your_azure_linux_vm>
my_linux_vm=<your_azure_linux_vm_name>
my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnostic_data>

# Log in to Azure before doing anything else.
az login

# Select the subscription that contains the storage account.
az account set --subscription <your_azure_subscription_id>

# Download the sample public configuration. (You can also use curl or any web browser.)
wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json

# Build the VM resource ID. Substitute the storage account name and resource ID in the public settings.
my_vm_resource_id=$(az vm show -g $my_resource_group -n $my_linux_vm --query "id" -o tsv)
sed -i "s#__DIAGNOSTIC_STORAGE_ACCOUNT__#$my_diagnostic_storage_account#g" portal_public_settings.json
sed -i "s#__VM_RESOURCE_ID__#$my_vm_resource_id#g" portal_public_settings.json

# Build the protected settings (storage account SAS token).
my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv)
my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"

# Finally, tell Azure to install and enable the extension.
az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group $my_resource_group --vm-name $my_linux_vm --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
Azure CLI sample to install LAD 3.0 on a virtual machine scale set instance
# Set your Azure Virtual Machine Scale Sets diagnostic variables.
my_resource_group=<your_azure_resource_group_name_containing_your_azure_linux_vm>
my_linux_vmss=<your_azure_linux_vmss_name>
my_diagnostic_storage_account=<your_azure_storage_account_for_storing_vm_diagnostic_data>

# Log in to Azure before doing anything else.
az login

# Select the subscription that contains the storage account.
az account set --subscription <your_azure_subscription_id>

# Download the sample public configuration. (You can also use curl or any web browser.)
wget https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json -O portal_public_settings.json

# Build the virtual machine scale set resource ID. Replace the storage account name and resource ID in the public settings.
my_vmss_resource_id=$(az vmss show -g $my_resource_group -n $my_linux_vmss --query "id" -o tsv)
sed -i "s#__DIAGNOSTIC_STORAGE_ACCOUNT__#$my_diagnostic_storage_account#g" portal_public_settings.json
sed -i "s#__VM_RESOURCE_ID__#$my_vmss_resource_id#g" portal_public_settings.json

# Build the protected settings (storage account SAS token).
my_diagnostic_storage_account_sastoken=$(az storage account generate-sas --account-name $my_diagnostic_storage_account --expiry 2037-12-31T23:59:00Z --permissions wlacu --resource-types co --services bt -o tsv)
my_lad_protected_settings="{'storageAccountName': '$my_diagnostic_storage_account', 'storageAccountSasToken': '$my_diagnostic_storage_account_sastoken'}"

# Finally, tell Azure to install and enable the extension.
az vmss extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group $my_resource_group --vmss-name $my_linux_vmss --protected-settings "${my_lad_protected_settings}" --settings portal_public_settings.json
PowerShell Example
$storageAccountName = "yourStorageAccountName"
$storageAccountResourceGroup = "yourStorageAccountResourceGroupName"
$vmName = "yourVMName"
$VMresourceGroup = "yourVMResourceGroupName"

# Get the VM object
$vm = Get-AzVM -Name $vmName -ResourceGroupName $VMresourceGroup

# Get the public settings template from GitHub and update the template values for the storage account and resource ID
$publicSettings = (Invoke-WebRequest -Uri https://raw.githubusercontent.com/Azure/azure-linux-extensions/master/Diagnostic/tests/lad_2_3_compatible_portal_pub_settings.json).Content
$publicSettings = $publicSettings.Replace('__DIAGNOSTIC_STORAGE_ACCOUNT__', $storageAccountName)
$publicSettings = $publicSettings.Replace('__VM_RESOURCE_ID__', $vm.Id)

# If you have customized public settings, you can inline them instead of using the preceding template: $publicSettings = '{"ladCfg": { ... },}'

# Generate a SAS token for the agent to use to authenticate with the storage account
$sasToken = New-AzStorageAccountSASToken -Service Blob,Table -ResourceType Service,Container,Object -Permission "racwdlup" -Context (Get-AzStorageAccount -ResourceGroupName $storageAccountResourceGroup -AccountName $storageAccountName).Context -ExpiryTime $([System.DateTime]::Now.AddYears(10))

# Build the protected settings (storage account SAS token)
$protectedSettings = "{'storageAccountName': '$storageAccountName', 'storageAccountSasToken': '$sasToken'}"

# Finally, install the extension with the settings you built
Set-AzVMExtension -ResourceGroupName $VMresourceGroup -VMName $vmName -Location $vm.Location -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 3.0
Update extension settings
After you change your protected or public settings, deploy them to the virtual machine by running the same command. If anything changed in the settings, the updates are sent to the extension. LAD reloads the configuration and restarts itself.
Migrate from previous versions of the extension
The latest version of the extension is 4.0.
Important
This extension introduces breaking changes to its configuration. One such change improved the security of the extension, so backward compatibility with 2.x could not be maintained. Also, the extension publisher for this extension differs from the publisher of the 2.x versions.
To migrate from 2.x to the new version, first uninstall the old extension (under the old publisher name). Then install version 3.
Recommendations:
- Install the extension with automatic minor version upgrade enabled.
  - On classic deployment model virtual machines, specify version 3.* if you install the extension through the Azure XPLAT CLI or PowerShell.
  - On Azure Resource Manager deployment model virtual machines, include "autoUpgradeMinorVersion": true in the VM deployment template.
- Use a new or different storage account for LAD 3.0. LAD 2.3 and LAD 3.0 have several small incompatibilities that make sharing an account troublesome:
  - LAD 3.0 stores syslog events in a table that has a different name.
  - The counterSpecifier strings for builtIn metrics differ in LAD 3.0.
Protected settings
This set of configuration information contains sensitive information that must be protected from public view. For example, it contains credentials for storage. These settings are transferred and stored by the extension in encrypted form.
{ "storageAccountName": "the storage account to receive data", "storageAccountEndPoint": "the cloud hostname suffix for this account", "storageAccountSasToken": "SAS access token", "mdsdHttpProxy": " http proxy configuration", "sinksConfig ": {...}}
Name | Value |
---|---|
storageAccountName | The name of the storage account where the extension writes data. |
storageAccountEndPoint | (Optional) The endpoint that identifies the cloud where the storage account resides. If this setting is missing, LAD defaults to the Azure public cloud, https://core.windows.net. To use a storage account in Azure Germany, Azure Government, or Azure China 21Vianet, set this value as required. |
storageAccountSasToken | An account SAS token for blob and table services (ss='bt'). It applies to containers and objects (srt='co'). It grants add, create, list, update, and write permissions (sp='acluw'). Don't include the leading question mark (?). |
mdsdHttpProxy | (Optional) HTTP proxy information that the extension needs to connect to the specified storage account and endpoint. |
sinksConfig | (Optional) Details of alternative destinations to which metrics and events can be delivered. The following sections describe each data sink that the extension supports. |
To get a SAS token within an ARM template, use the listAccountSas function. For an example template, see the list function example.
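As a rough illustration, the protected settings of the extension resource might obtain the token from listAccountSas as in the following sketch. The parameter names (storageAccountName, accountSasProperties) and the API version are illustrative assumptions, not required values; adjust them to your own template.
"protectedSettings": {
    "storageAccountName": "[parameters('storageAccountName')]",
    "storageAccountSasToken": "[listAccountSas(parameters('storageAccountName'), '2019-06-01', parameters('accountSasProperties')).accountSasToken]"
}
The accountSasProperties parameter in this sketch would carry the same values described in the preceding table, for example:
"accountSasProperties": {
    "type": "object",
    "defaultValue": {
        "signedServices": "bt",
        "signedResourceTypes": "co",
        "signedPermission": "acluw",
        "signedExpiry": "2037-12-31T23:59:00Z"
    }
}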
You can create the required SAS token through the Azure portal:
- Select the general-purpose storage account that you want the extension to write to.
- In the left menu, under Settings, select Shared access signature.
- Make the selections as previously described.
- Select Generate SAS.
Copy the generated SAS into the storageAccountSasToken field. Remove the leading question mark (?).
sinksConfig
"sinksConfig": { "sink": [ { "name": "sinkname", "type": "sinktype", ... }, ... ]},
The optional sinksConfig section defines more destinations to which the extension sends the information it collects. The "sink" array contains an object for each additional data sink. The "type" attribute determines the other attributes in the object.
Element | Value |
---|---|
name | A string that refers to this sink in other parts of the extension configuration. |
type | The type of sink being defined. Determines the other values (if any) in instances of this type. |
LAD version 3.0 supports two sink types: EventHub and JsonBlob.
The EventHub sink
"sink": [ { "name": "sinkname", "type": "EventHub", "sasURL": "https SAS URL" }, ...]
The "sasURL" entry contains the full URL, including the SAS token, of the event hub to which data is published. LAD requires a SAS that names a policy that enables the Send claim.
An example:
- Create an Azure Event Hubs namespace named contosohub.
- Create an event hub in the namespace named syslogmsgs.
- Create a shared access policy on the event hub that enables the Send claim. Name the policy writer.
If your SAS is valid until midnight UTC on January 1, 2018, the sasURL value might look like this example:
https://contosohub.servicebus.windows.net/syslogmsgs?sr=contosohub.servicebus.windows.net%2fsyslogmsgs&sig=xxxxxxxxxxxxxxxxxxxxxxxx&se=1514764800&skn=writer
For more information about generating and retrieving SAS tokens for Event Hubs, see Generate a SAS token.
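Putting this example together, a corresponding sink entry in the protected settings might look like the following sketch. The sink name syslogToEventHub is an arbitrary illustrative choice, and the signature value is a placeholder:
"sinksConfig": {
    "sink": [
        {
            "name": "syslogToEventHub",
            "type": "EventHub",
            "sasURL": "https://contosohub.servicebus.windows.net/syslogmsgs?sr=contosohub.servicebus.windows.net%2fsyslogmsgs&sig=xxxxxxxxxxxxxxxxxxxxxxxx&se=1514764800&skn=writer"
        }
    ]
}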
The JsonBlob sink
"sink": [ { "name": "sinkname", "type": "JsonBlob" }, ...]
Data directed to a JsonBlob sink is stored in blobs in Azure Storage. Each instance of LAD creates one blob every hour for each named sink. Each blob always contains a syntactically valid JSON array of objects. New entries are atomically added to the array.
Blobs are stored in a container that has the same name as the sink. The Azure Storage rules for blob container names apply to the names of JsonBlob sinks: names must have between 3 and 63 lowercase alphanumeric ASCII characters or dashes.
Public settings
The public settings structure contains various blocks of settings that control the information the extension collects. Each setting is optional. If you specify ladCfg, you must also specify StorageAccount.
{ "ladCfg": { ... }, "perfCfg": { ... }, "fileLogs": { ... }, "StorageAccount": "the storage account to receive data from", "mdsdHttpProxy" : " " }
Element | Value |
---|---|
StorageAccount | The name of the storage account where the extension writes data. It must be the name that's specified in the protected settings. |
mdsdHttpProxy | (Optional) Same as in the protected settings. The public value is overridden by the private value, if it's set. Put proxy settings that contain a secret, such as a password, in the protected settings. |
The following sections contain information on the remaining items.
ladCfg
"ladCfg": { "diagnosticMonitorConfiguration": { "eventVolume": "Medium", "metrics": { ... }, "performanceCounters": { ... }, "syslogEvents": { ... } }, "sampleRateInSeconds": 15}
The optional ladCfg structure controls the gathering of metrics and logs for delivery to the Azure Monitor Metrics service and to other data sinks. You must specify:
- Either performanceCounters or syslogEvents or both.
- The metrics structure.
Element | Value |
---|---|
eventVolume | (Optional) Controls the number of partitions created within the storage table. It must be "Large", "Medium", or "Small". If the value isn't specified, the default is "Medium". |
sampleRateInSeconds | (Optional) The default interval between the collection of raw (unaggregated) metrics. The smallest supported sample rate is 15 seconds. If the value isn't specified, the default is 15. |
metrics
"metrics": { "resourceId": "/subscriptions/...", "metricAggregation": [ { "scheduledTransferPeriod": "PT1H" }, { "scheduledTransferPeriod": "PT5M" } ]}
Element | Value |
---|---|
resourceId | The Azure Resource Manager resource ID of the virtual machine, or of the virtual machine scale set that the virtual machine belongs to. This setting must also be specified if any JsonBlob sink is used in the configuration. |
scheduledTransferPeriod | The frequency at which aggregate metrics are computed and transferred to Azure Monitor Metrics. The frequency is expressed as an ISO 8601 time interval. The smallest transfer period is 60 seconds, that is, PT1M. Specify at least one scheduledTransferPeriod. |
Samples of the metrics specified in the performanceCounters section are collected every 15 seconds, or at the sample rate explicitly defined for the counter. If multiple scheduledTransferPeriod frequencies appear, as in the example, each aggregation is computed independently.
performanceCounters
"performanceCounters": {
    "sinks": "",
    "performanceCounterConfiguration": [
        {
            "type": "builtin",
            "class": "Processor",
            "counter": "PercentIdleTime",
            "counterSpecifier": "/builtin/Processor/PercentIdleTime",
            "condition": "IsAggregate=TRUE",
            "sampleRate": "PT15S",
            "unit": "Percent",
            "annotation": [
                {
                    "displayName": "Aggregate CPU idle time",
                    "locale": "en-us"
                }
            ]
        }
    ]
}
The optional performanceCounters section controls the collection of metrics. Raw samples are aggregated for each scheduledTransferPeriod to produce these values:
- Mean
- Minimum
- Maximum
- Last-collected value
- Count of raw samples used to compute the aggregate
Element | Value |
---|---|
sinks | (Optional) A comma-separated list of names of sinks to which LAD sends aggregated metric results. All aggregated metrics are published to each listed sink. Example: "EHsink1, myjsonsink". For more information, see sinksConfig. |
type | Identifies the actual provider of the metric. |
class | Together with "counter", identifies the specific metric within the provider's namespace. |
counter | Together with "class", identifies the specific metric within the provider's namespace. |
counterSpecifier | Identifies the specific metric within the Azure Monitor Metrics namespace. |
condition | (Optional) Selects a specific instance of the object to which the metric applies, or selects the aggregation across all instances of that object. |
sampleRate | The ISO 8601 interval that sets the rate at which raw samples for this metric are collected. If the value isn't specified, the collection interval is set by the value of sampleRateInSeconds. The smallest supported sample rate is 15 seconds (PT15S). |
unit | Defines the unit for the metric. It should be one of these strings: "Count", "Bytes", "Seconds", "Percent", "CountPerSecond", "BytesPerSecond", "Millisecond". Consumers of the collected data expect the collected data values to match this unit. LAD ignores this field. |
displayName | The label to be attached to the data in Azure Monitor Metrics. This label is in the language specified by the associated locale setting. LAD ignores this field. |
counterSpecifier
is an arbitrary identifier. Consumers of metrics, like the Azure portal charting and alerting feature, use counterSpecifier
as the "key" that identifies a metric or an instance of a metric.
For builtIn metrics, we recommend counterSpecifier
values that begin with /builtin/
. If you're collecting a specific instance of a metric, we recommend that you attach the identifier of the instance to the counterSpecifier
value.
Here are some examples:
/builtin/Processor/PercentIdleTime
- Idle time averaged across all vCPUs/builtin/Disk/FreeSpace(/mnt)
- Free space for the /mnt
file system/builtin/Disk/FreeSpace
- Free space averaged across all mounted file systems
Neither LAD nor the Azure portal expects the counterSpecifier
value to match any pattern. Be consistent in how you construct counterSpecifier
values.
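For instance, a counter definition following this convention for a single mounted file system might look like the following sketch. The display name and the (/mnt) suffix on the counterSpecifier are illustrative choices, not required values:
{
    "type": "builtin",
    "class": "Filesystem",
    "counter": "FreeSpace",
    "counterSpecifier": "/builtin/FileSystem/FreeSpace(/mnt)",
    "condition": "Name=\"/mnt\"",
    "sampleRate": "PT15S",
    "unit": "Bytes",
    "annotation": [
        { "displayName": "Free space on /mnt", "locale": "en-us" }
    ]
}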
When you specify performanceCounters,
LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. But you can't disable storing data to a table.
All instances of LAD that use the same storage account name and endpoint add their metrics and logs to the same table. If too many virtual machines write to the same table partition, Azure can throttle writes to that partition.
The eventVolume
setting causes entries to be spread across 1 (Small), 10 (Medium), or 100 (Large) partitions. Usually, Medium partitions are sufficient to avoid traffic throttling.
The Azure Monitor Metrics feature of the Azure portal uses the data in this table to produce graphs or to trigger alerts. The table name is the concatenation of these strings:
- WADMetrics
- The "scheduledTransferPeriod" for the aggregated values stored in the table
- P10DV2S
- A date, in the form "YYYYMMDD", that changes every 10 days
Examples are WADMetricsPT1HP10DV2S20170410
and WADMetricsPT1MP10DV2S20170609
.
syslogEvents
"syslogEvents": { "sinks": "", "syslogEventConfiguration": { "facilityName1": "minSeverity", "facilityName2": "minSeverity", ... }}
The optional syslogEvents
section controls the collection of log events from syslog. If the section is omitted, syslog events aren't captured at all.
The syslogEventConfiguration
collection has one entry for each syslog facility of interest. If minSeverity
is "NONE"
for a particular facility, or if that facility doesn't appear in the element at all, no events from that facility are captured. A sample configuration appears after the following table.
Element | Value |
---|---|
sinks | A comma-separated list of names of sinks to which individual log events are published. All log events that match the restrictions in syslogEventConfiguration are published to each listed sink. Example: "EHforsyslog" |
facilityName | A syslog facility name, such as "LOG_USER" or "LOG_LOCAL0". For more information, see the "facility" section of the syslog man page. |
minSeverity | A syslog severity level, such as "LOG_ERR" or "LOG_INFO". For more information, see the "level" section of the syslog man page. The extension captures events sent to the facility at or above the specified level. |
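As an illustration, the following sketch captures informational and higher events from the user facility and only errors from the local0 facility, routing them to a JsonBlob sink. The sink name SyslogJsonBlob assumes a sink of that name is defined in the protected settings:
"syslogEvents": {
    "sinks": "SyslogJsonBlob",
    "syslogEventConfiguration": {
        "LOG_USER": "LOG_INFO",
        "LOG_LOCAL0": "LOG_ERR"
    }
}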
When you specify syslogEvents,
LAD always writes data to a table in Azure Storage. The same data can be written to JSON blobs or Event Hubs or both. But you can't disable storing data to a table.
The partitioning behavior for this table is the same as described for performanceCounters
. The table name is the concatenation of these strings:
- LinuxSyslog
- A date, in the form "YYYYMMDD", that changes every 10 days
Examples are LinuxSyslog20170410
and LinuxSyslog20170609
.
perfCfg
The optional perfCfg
section controls the execution of arbitrary Open Management Infrastructure (OMI) queries.
"perfCfg": [ { "namespace": "root/scx", "query": "SELECT PercentAvailableMemory, PercentUsedSwap FROM SCX_MemoryStatisticalInformation", "table": "LinuxOldMemory", "frequency": 300, "sinks": "" } ]
Element | Value |
---|---|
namespace | (Optional) The OMI namespace within which to run the query. If it's not specified, the default value is "root/scx". It's implemented by the System Center Cross-platform providers. |
query | The OMI query to run. |
table | (Optional) The Azure Storage table in the designated storage account. For more information, see Protected settings. |
frequency | (Optional) The number of seconds between query executions. The default value is 300 (5 minutes). The minimum value is 15 seconds. |
sinks | (Optional) A comma-separated list of names of more sinks to which raw sample metric results are published. No aggregation of these raw samples is computed by the extension or by Azure Monitor Metrics. |
Some"tabla"
o"sinkholes"
both must be specified.
fileLogs
The fileLogs
section controls the capture of log files. LAD captures new text lines as they're written to the file. It writes them to table rows and/or any specified sinks (JsonBlob
or EventHub
).
Note
fileLogs
is captured by a subcomponent of LAD called omsagent
. To collect fileLogs
, make sure the omsagent
user has read access to the files you specify. The user must also have execute permissions on all directories in the path to the file. After LAD is installed, you can check permissions by running sudo su omsagent -c 'cat /path/to/file'
.
"fileLogs": [ { "file": "/var/log/mydaemonlog", "table": "MyDaemonEvents", "sinks": "" }]
Element | Value |
---|---|
file | The full path name of the log file to be watched and captured. The path name must name a single file. It can't name a directory or contain wildcard characters. The omsagent user account must have read access to the file path. |
table | (Optional) The Azure Storage table into which new lines from the "tail" of the file are written. The table must be in the designated storage account, as specified in the protected settings. |
sinks | (Optional) A comma-separated list of names of more sinks to which log lines are sent. |
Some"tabla"
o"sinkholes"
, or both, must be specified.
Metrics supported by the builtIn provider
The builtIn metric provider is a source of the metrics most interesting to a broad set of users. These metrics fall into five broad classes:
- Processor
- Memory
- Network
- File system
- Disk
builtIn metrics for the Processor class
The Processor class of metrics provides information about processor usage in the virtual machine. When percentages are aggregated, the result is the average across all CPUs.
In a two-vCPU virtual machine, if one vCPU is 100 percent busy and the other is 100 percent idle, the reported PercentIdleTime
is 50. If each vCPU is 50 percent busy for the same period, the reported result is also 50. In a four-vCPU virtual machine, when one vCPU is 100 percent busy and the others are idle, the reported PercentIdleTime
is 75.
Counter | Meaning |
---|---|
PercentIdleTime | Percentage of time during the aggregation window that processors ran the kernel idle loop |
PercentProcessorTime | Percentage of time running a non-idle thread |
PercentIOWaitTime | Percentage of time waiting for I/O operations to finish |
PercentInterruptTime | Percentage of time running hardware or software interrupts and DPCs (deferred procedure calls) |
PercentUserTime | Of non-idle time during the aggregation window, the percentage of time spent in user mode at normal priority |
PercentNiceTime | Of non-idle time, the percentage spent at lowered (nice) priority |
PercentPrivilegedTime | Of non-idle time, the percentage spent in privileged (kernel) mode |
The first four counters should sum to 100 percent. The last three counters also sum to 100 percent. These three counters subdivide the sum of PercentProcessorTime
, PercentIOWaitTime
, and PercentInterruptTime
.
To obtain a single metric aggregated across all processors, specify "condition": "IsAggregate=TRUE"
. To obtain a metric for a specific processor, such as the second logical processor of a four-vCPU virtual machine, specify "condition": "Name=\\"1\\""
. Logical processor numbers are in the range [0..n-1]
.
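To make the difference concrete, here is a sketch of two counter definitions: one aggregated across all processors and one for the second logical processor. The (1) suffix on the second counterSpecifier and the display names are illustrative choices:
{
    "type": "builtin",
    "class": "Processor",
    "counter": "PercentIdleTime",
    "counterSpecifier": "/builtin/Processor/PercentIdleTime",
    "condition": "IsAggregate=TRUE",
    "sampleRate": "PT15S",
    "unit": "Percent",
    "annotation": [ { "displayName": "CPU idle time, all processors", "locale": "en-us" } ]
},
{
    "type": "builtin",
    "class": "Processor",
    "counter": "PercentIdleTime",
    "counterSpecifier": "/builtin/Processor/PercentIdleTime(1)",
    "condition": "Name=\"1\"",
    "sampleRate": "PT15S",
    "unit": "Percent",
    "annotation": [ { "displayName": "CPU idle time, processor 1", "locale": "en-us" } ]
}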
builtIn metrics for the Memory class
The Memory class of metrics provides information about memory use, paging, and swapping.
Counter | Meaning |
---|---|
AvailableMemory | Available physical memory in MiB |
PercentAvailableMemory | Available physical memory as a percentage of total memory |
UsedMemory | In-use physical memory (MiB) |
PercentUsedMemory | In-use physical memory as a percentage of total memory |
PagesPerSec | Total paging (read/write) |
PagesReadPerSec | Pages read from backing store, such as swap file, program file, and mapped file |
PagesWrittenPerSec | Pages written to backing store, such as swap file and mapped file |
AvailableSwap | Unused swap space (MiB) |
PercentAvailableSwap | Unused swap space as a percentage of total swap |
UsedSwap | In-use swap space (MiB) |
PercentUsedSwap | In-use swap space as a percentage of total swap |
This class of metrics has only one instance. The "condition"
attribute has no useful settings and should be omitted.
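For example, a Memory counter definition simply leaves out condition, as in this sketch (the display name is illustrative):
{
    "type": "builtin",
    "class": "Memory",
    "counter": "PercentUsedMemory",
    "counterSpecifier": "/builtin/Memory/PercentUsedMemory",
    "sampleRate": "PT15S",
    "unit": "Percent",
    "annotation": [ { "displayName": "Memory in use, percent", "locale": "en-us" } ]
}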
builtIn metrics for the Network class
The Network class of metrics provides information about network activity on an individual network interface since boot.
LAD doesn't expose bandwidth metrics. You can get these metrics from host metrics.
Counter | Meaning |
---|---|
BytesTransmitted | Total bytes sent since boot |
BytesReceived | Total bytes received since boot |
BytesTotal | Total bytes sent or received since boot |
PacketsTransmitted | Total packets sent since boot |
PacketsReceived | Total packets received since boot |
TotalRxErrors | Number of receive errors since boot |
TotalTxErrors | Number of transmit errors since boot |
TotalCollisions | Number of collisions reported by the network ports since boot |
Although the Network class is instanced, LAD doesn't support capturing Network metrics aggregated across all network devices. To obtain metrics for a specific interface, such as eth0, specify "condition": "InstanceID=\\"eth0\\""
.
builtIn metrics for the File system class
The File system class of metrics provides information about file system usage. Absolute and percentage values are reported as they would be displayed to an ordinary user (not root).
Counter | Meaning |
---|---|
FreeSpace | Available disk space in bytes |
UsedSpace | Used disk space in bytes |
PercentFreeSpace | Percentage of free space |
PercentUsedSpace | Percentage of used space |
PercentFreeInodes | Percentage of unused index nodes (inodes) |
PercentUsedInodes | Percentage of allocated (in use) inodes summed across all file systems |
BytesReadPerSecond | Bytes read per second |
BytesWrittenPerSecond | Bytes written per second |
BytesPerSecond | Bytes read or written per second |
ReadsPerSecond | Read operations per second |
WritesPerSecond | Write operations per second |
TransfersPerSecond | Read or write operations per second |
You can obtain aggregated values across all file systems by specifying "condition": "IsAggregate=True"
. To obtain values for a specific mounted file system, such as "/mnt"
, specify "condition": 'Name="/mnt"'
.
Note
If you're working in the Azure portal instead of JSON, the condition field form is Name='/mnt'
.
builtIn metrics for the Disk class
The Disk class of metrics provides information about disk device usage. These statistics apply to the entire drive.
When a device has multiple file systems, the counters for that device are, effectively, aggregated across all of them.
Counter | Meaning |
---|---|
ReadsPerSecond | Read operations per second |
WritesPerSecond | Write operations per second |
TransfersPerSecond | Total operations per second |
AverageReadTime | Average seconds per read operation |
AverageWriteTime | Average seconds per write operation |
AverageTransferTime | Average seconds per operation |
AverageDiskQueueLength | Average number of queued disk operations |
ReadBytesPerSecond | Number of bytes read per second |
WriteBytesPerSecond | Number of bytes written per second |
BytesPerSecond | Number of bytes read or written per second |
You can obtain aggregated values across all disks by specifying "condition": "IsAggregate=True"
. To get information for a specific device (for example, /dev/sdf1
), specify "condition": "Name=\\"/dev/sdf1\\""
.
Install and configure LAD 3.0
Azure CLI
If your protected settings are in the file ProtectedSettings.json and your public configuration information is in PublicSettings.json, run this command:
az vm extension set --publisher Microsoft.Azure.Diagnostics --name LinuxDiagnostic --version 3.0 --resource-group <resource_group_name> --vm-name <vm_name> --protected-settings ProtectedSettings.json --settings PublicSettings.json
The command assumes that you're using the Azure Resource Manager mode of the Azure CLI. To configure LAD for classic deployment model (ASM) VMs, switch to "asm" mode (azure config mode asm
) and omit the resource group name in the command.
For more information, see the cross-platform CLI documentation.
PowerShell
If your protected settings are in the $protectedSettings
variable and your public configuration information is in the $publicSettings
variable, run this command:
Set-AzVMExtension -ResourceGroupName <resource_group_name> -VMName <vm_name> -Location <vm_location> -ExtensionType LinuxDiagnostic -Publisher Microsoft.Azure.Diagnostics -Name LinuxDiagnostic -SettingString $publicSettings -ProtectedSettingString $protectedSettings -TypeHandlerVersion 3.0
Sample LAD 3.0 configuration
Based on the preceding definitions, this section provides a sample LAD 3.0 extension configuration and some explanation. To apply this sample to your case, use your own storage account name, account SAS token, and Event Hubs SAS tokens.
Note
Depending on whether you use the Azure CLI or PowerShell to install LAD, the method for providing public and protected settings differs:
- If you're using the Azure CLI, save the following settings to ProtectedSettings.json and PublicSettings.json to use the preceding sample command.
- If you're using PowerShell, save the following settings to $protectedSettings
and $publicSettings
by running $protectedSettings = '{ ... }'
.
Protected settings
The protected settings configure:
- A storage account.
- A matching account SAS token.
- Several sinks (JsonBlob
or EventHub
with SAS tokens).
{ "storageAccountName": "yourdiagstgacct", "storageAccountSasToken": "sv=xxxx-xx-xx&ss=bt&srt=co&sp=wlacu&st=yyyy-yy-yyT21%3A22%3A00Z&se=zzzz-zz-zzT21%3A22%3A00Z&sig=falsa_firma" , "sinksConfig": { "sink": [ { "name": "SyslogJsonBlob", "type": "JsonBlob" }, { "name": "FilelogJsonBlob", "type": "JsonBlob" }, { "name ": "LinuxCpuJsonBlob", "tipo": "JsonBlob" }, { "nombre": "MyJsonMetricsBlob", "tipo": "JsonBlob" }, { "nombre": "LinuxCpuEventHub", "tipo": "EventHub", "sasURL": "https://youreventhubnamespace.servicebus.windows.net/yourreventhubpublisher?sr=https%3a%2f%2fyoureventhubnamespace.servicebus.windows.net%2fyoureventhubpublisher%2f&sig=fake_signature&se=1808096361&skn=yourehpolicy" }, { "nome ": "MyMetricEventHub", "type": "EventHub", "sasURL": "https://yourreventhubnamespace.servicebus.windows.net/yourreventhubpublisher?sr=https%3a%2f%2fyoureventhubnamespace.servicebus.windows.net%2fyoureventhubpublisher %2f&sig=yourehpolicy&skn=yourehpolicy" }, { "name": "LoggingEventHub", "type": " Mesmo tHub", "sasURL": "https://youreventhubnamespace.servicebus.windows.net/youreventhubpublisher?sr=https%3a%2f%2fyoureventhubnamespace.servicebus.windows.net%2fyoureventhubpublisher%2f&sig=yourehpolicy&se=1808096361&skn=yourehpolicy" } ] }}
Public settings
The public settings cause LAD to:
- Upload percent-processor-time and used-disk-space metrics to the WADMetrics*
table.
- Upload messages from the syslog facility "user"
and severity "info"
to the LinuxSyslog*
table.
- Upload raw results of the OMI query (PercentProcessorTime
and PercentIdleTime
) to the named LinuxCpu
table.
- Upload appended lines in the file /var/log/myladtestlog
to the MyLadTestLog
table.
In each case, data is also uploaded to:
- Azure Blob Storage. The container name is as defined in the JsonBlob
sink.
- An Event Hubs endpoint, as defined in the EventHub
sink.
{ "StorageAccount": "yourdiagstgacct", "ladCfg": { "sampleRateInSeconds": 15, "diagnosticMonitorConfiguration": { "performanceCounters": { "sinks": "MyMetricEventHub,MyJsonMetricsBlob", "performanceCounterConfiguration": [ { "unit": "Percentage", "type": "incorporado", "counter": "PercentProcessorTime", "counterSpecifier": "/incorporado/Procesador/PercentProcessorTime", "anotación": [ { "locale": "en-us", " displayName": "%aggregated CPU utilization" } ], "condition": "IsAggregate=TRUE", "class": "Procesador" }, { "unit": "Bytes", "type": "integrated", " counter ": "UsedSpace", "counterSpecifier": "/incorporado/FileSystem/UsedSpace", "anotación": [ { "locale": "en-us", "displayName": "Disk space used in /" } ] , " condition": "Name=\"/\"", "class": "Filesystem" } ] }, "metrics": { "metricAggregation": [ { "scheduledTransferPeriod": "PT1H" }, { "scheduledTransferPeriod" : " PT1M" } ], "resourceId": "/subscriptions/your_azure_subscription_id/resourceGroups/your_resource_group_name /providers/Microsoft.Com pute/virtualMachines/your_vm_name" }, "eventVolume": "Large", "syslogEvents": { "sinks": "SyslogJsonBlob,LoggingEventHub", "syslogEventConfiguration": { "LOG_USER": "LOG_INFO" } } } }, " perfCfg": [ { "query": "SELECT PercentProcessorTime, PercentIdleTime FROM SCX_ProcessorStatisticalInformation WHERE Name='_TOTAL'", "table": "LinuxCpu", "frequency": 60, "sinks": "LinuxCpuJsonBlob ,LinuxCpuEventHub" } ], "fileLogs": [ { "file": "/var/log/myladtestlog", "table": "MyLadTestLog", "sinks": "FilelogJsonBlob,LoggingEventHub" } ]}
The resourceId
in the configuration must match that of the virtual machine or the virtual machine scale set. (A sketch of a scale set resourceId follows this list.)
- Azure platform charts and alerts know the resourceId
of the virtual machine you're working on. They expect to find the data for your virtual machine by using the resourceId
as the lookup key.
- If you use Azure autoscale, the resourceId
in the autoscale configuration must match the resourceId
that LAD uses.
- The resourceId
is built into the names of JSON blobs written by LAD.
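For a virtual machine scale set, the metrics block would carry the scale set's resource ID instead of an individual VM's. The following sketch uses placeholder names:
"metrics": {
    "resourceId": "/subscriptions/<subscription_id>/resourceGroups/<resource_group_name>/providers/Microsoft.Compute/virtualMachineScaleSets/<vmss_name>",
    "metricAggregation": [
        { "scheduledTransferPeriod": "PT1H" }
    ]
}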
View your data
Use the Azure portal to view performance data or to set alerts:
The performanceCounters
data is always stored in an Azure Storage table. Azure Storage APIs are available for many languages and platforms.
Data sent to JsonBlob
sinks is stored in blobs in the storage account named in the protected settings. You can consume the blob data by using any Azure Blob Storage API.
You can also use these interface tools to access data in Azure Storage:
- Visual Studio Server Explorer
- Azure Storage Explorer
The following screenshot of an Azure Storage Explorer session shows the Azure Storage tables and containers generated by a correctly configured LAD 3.0 extension on a test VM. The image doesn't exactly match the sample LAD 3.0 configuration.
For more information about how to consume messages published to an Event Hubs endpoint, see the Event Hubs documentation.
Next steps
- Create alerts in Azure Monitor for the metrics you collect.
- Create monitoring charts for your metrics.
- Learn how to create a virtual machine scale set by using your metrics to control autoscaling.