Understand SAS, keys, storage firewalls, Azure Files access, blob lifecycle settings, versioning, and soft-delete behavior for AZ-104.
This objective area is where AZ-104 turns storage into a real operations problem. The question is not only whether data exists. It is whether the right clients can reach it, whether access can be limited safely, and whether accidental deletion or age-based sprawl is already under control.
Microsoft expects you to configure storage firewalls and virtual networks, create and use SAS tokens, configure stored access policies, manage access keys, and configure identity-based access for Azure Files. It also expects you to work with blob containers, file shares, storage tiers, versioning, lifecycle rules, and soft-delete features for both blobs and Azure Files.
Use the narrowest access mechanism that still fits the client. A SAS is often safer than exposing account keys broadly. Private endpoints provide the strongest private-access model for many PaaS scenarios, but they usually require private DNS work. Service endpoints are lighter but keep the service on a public endpoint. Soft delete, snapshots, versioning, and lifecycle rules all protect data in different ways, so choose the feature that matches the failure mode you care about.
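As a sketch of the "narrowest mechanism" idea, the following generates a short-lived, read-only SAS for a single container. The account name `stgexamdemo01` and container `appdata` are placeholders, and the four-hour window is illustrative; `--as-user` with `--auth-mode login` requests a user delegation SAS backed by Microsoft Entra rather than an account key.

```shell
# Compute an expiry a few hours out (GNU date syntax)
EXPIRY=$(date -u -d '+4 hours' '+%Y-%m-%dT%H:%MZ')

# Generate a read/list-only SAS scoped to one container
# (placeholder account and container names)
az storage container generate-sas \
  --account-name stgexamdemo01 \
  --name appdata \
  --permissions rl \
  --expiry "$EXPIRY" \
  --auth-mode login \
  --as-user \
  --output tsv
```

Because the token carries only `rl` permissions on one container and expires on its own, leaking it is far less damaging than leaking an account key.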
AZ-104 often groups these services together under storage administration, but the access patterns are not interchangeable.
| Service | Administrative focus | Common exam distinction |
|---|---|---|
| Blob Storage | Containers, object access, tiers, lifecycle, versioning, immutability, SAS | Usually tests data-plane controls and retention behavior |
| Azure Files | File shares, SMB or NFS access, identity-based access, backup and snapshots | Often tests share access models and file-service administration rather than object access |
If the question sounds like application object storage, think blob first. If it sounds like a file share replacement or shared file path, think Azure Files first.
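The administrative split shows up even at creation time. A hedged sketch, reusing the placeholder account from later examples: an Azure Files share is created through the management plane rather than the blob data plane.

```shell
# Create an SMB file share through the management plane
# (share name and quota are illustrative placeholders)
az storage share-rm create \
  --resource-group app-rg \
  --storage-account stgexamdemo01 \
  --name teamshare \
  --quota 100
```

Using `share-rm` keeps the operation on the Azure Resource Manager side, so it works with role-based access and does not require pulling a storage account key first.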
The classic misses are overusing account keys, forgetting that private endpoints require name resolution to line up, and assuming lifecycle management or versioning behaves like a full backup strategy. Another trap is treating Azure Files and Blob Storage as interchangeable when their access and admin models differ.
| Need | Strongest first choice | Why |
|---|---|---|
| Delegate narrow temporary access to storage data | SAS | Limits scope and duration better than sharing account keys |
| Allow trusted VNet-based access while keeping the service on its public endpoint | Service endpoint | Lightweight network restriction pattern |
| Put the service behind a private IP in your VNet | Private endpoint | Stronger private-access model for many PaaS data paths |
| Recover from accidental deletion or overwrite in blob workflows | Versioning and soft delete | Protects common operator mistakes |
| Age data down automatically | Lifecycle rules | Moves or deletes data by policy rather than manual cleanup |
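Lifecycle rules from the table above are defined as JSON policy and attached to the account. A minimal sketch, with an assumed rule name, prefix filter, and day thresholds chosen purely for illustration:

```shell
# policy.json: tier to cool after 30 days, archive after 90, delete after 365
cat > policy.json <<'EOF'
{
  "rules": [
    {
      "enabled": true,
      "name": "age-down-appdata",
      "type": "Lifecycle",
      "definition": {
        "filters": { "blobTypes": ["blockBlob"], "prefixMatch": ["appdata/"] },
        "actions": {
          "baseBlob": {
            "tierToCool": { "daysAfterModificationGreaterThan": 30 },
            "tierToArchive": { "daysAfterModificationGreaterThan": 90 },
            "delete": { "daysAfterModificationGreaterThan": 365 }
          }
        }
      }
    }
  ]
}
EOF

# Attach the policy to the placeholder storage account
az storage account management-policy create \
  --resource-group app-rg \
  --account-name stgexamdemo01 \
  --policy @policy.json
```

Note the exam-relevant distinction: this rule deletes data by age, which is cleanup, not backup. Versioning and soft delete cover the accidental-deletion failure mode instead.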
| Tool | Best use | Common mistake |
|---|---|---|
| Account key | Full account-level access when key-based auth is explicitly required | Treating it as the default for routine delegated access |
| SAS token | Narrow, time-bound delegated access to storage data | Forgetting that a loose SAS can still be too broad |
| Stored access policy | Central control over SAS constraints on supported storage objects | Assuming it is the same thing as the SAS token itself |
| Azure Files identity-based access | File-share access tied to identity instead of shared keys only | Treating Azure Files authorization exactly like blob SAS authorization |
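To make the stored-access-policy distinction concrete, here is a hedged sketch with placeholder names and an illustrative expiry date. The policy lives on the container; the SAS merely references it, so deleting the policy revokes every SAS issued against it.

```shell
# Define a stored access policy on the container (server-side, revocable)
az storage container policy create \
  --account-name stgexamdemo01 \
  --container-name appdata \
  --name read-window \
  --permissions rl \
  --expiry 2026-01-01T00:00Z

# Issue a SAS that references the policy instead of embedding its own
# permissions and expiry
az storage container generate-sas \
  --account-name stgexamdemo01 \
  --name appdata \
  --policy-name read-window \
  --output tsv

# Later: deleting the policy invalidates all SAS tokens tied to it
az storage container policy delete \
  --account-name stgexamdemo01 \
  --container-name appdata \
  --name read-window
```

This is why the table warns against conflating the two: a plain SAS cannot be revoked without rotating keys, while a policy-backed SAS can be killed centrally.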
This example covers three objective areas at once: container creation, key awareness, and data movement tooling.
```shell
# Create a blob container by using Microsoft Entra authentication
az storage container create \
  --account-name stgexamdemo01 \
  --name appdata \
  --auth-mode login

# Review account keys only when the scenario explicitly depends on
# key-based access or rotation
az storage account keys list \
  --resource-group app-rg \
  --account-name stgexamdemo01

# Move data by using AzCopy and a scoped SAS
azcopy copy ./seed-data "https://stgexamdemo01.blob.core.windows.net/appdata?<sas-token>" --recursive=true
```
What to notice:

- `--auth-mode login` authorizes the container creation with Microsoft Entra credentials instead of an account key.
- Listing account keys is a deliberate, scenario-driven step here, not a default habit.
- The AzCopy command carries a SAS scoped to a single container, so the token grants nothing account-wide.
When a storage workload stops working after you tighten network access, check these in order:

1. Storage firewall and virtual network rules: is the caller's network allowed to reach the account at all?
2. Private endpoint state: does an approved private endpoint exist for the right sub-resource (blob, file, and so on)?
3. Name resolution: does the account name resolve through the privatelink path to a private IP, or is it still pointing at the public endpoint?

That order also helps with Azure Files. If the share is expected to resolve and travel privately but the name still points to the public path, the storage problem usually starts in DNS rather than in share configuration.
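A quick way to test the DNS step, assuming a private endpoint targets the blob sub-resource of the placeholder account:

```shell
# From inside the VNet, the name should resolve through the privatelink zone
# to a private IP; from outside, it resolves to the public endpoint.
nslookup stgexamdemo01.blob.core.windows.net

# When the private DNS zone is wired correctly, expect a CNAME to
# stgexamdemo01.privatelink.blob.core.windows.net and a private
# (RFC 1918) address in the answer.
```

If the lookup returns a public IP from inside the VNet, fix the private DNS zone link before touching share or container configuration.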
Continue into Compute once storage access patterns feel natural.