Inputs, formulas, and how to interpret outputs.

Turn observations into defensible requirements. The worksheet structures vehicle classes (223), run-up distance/speed (222, 224), and approach angles (225), then applies sensitivity factors (228) with validation checks. Outputs feed the selection spine (432, 443) and array rules (232, 321–326), and exports file cleanly under the naming rules (911) so later reviewers can confirm how HVM bollard or crash-rated bollard ratings were chosen (411–414). Where SIRA is in scope, reference SIRA Bollards (UAE); link What to Expect and the Installation Guide where installation context helps.

Important: This is a general guide. For live projects we develop a tailored Method Statement & Risk Assessment (MS/RA) and align with authority approvals (e.g., SIRA) where in scope.

913.1 Scenario table

List vehicles, run-up distances, surfaces, gradients (223, 222, 224). Drives HVM bollard rating needs.

The scenario table is the spine of a Vehicle Dynamics Assessment. Create one row per credible vehicle class and approach path, noting run-up distance, surface condition, gradient, and constraints that affect speed attainment. Keep columns lean: class, unladen mass, run-up, surface, gradient, observed speed aids/limits, and a photo link. This table later ties directly to rating choices in the selection guide.

Use consistent units and naming aligned with File Index & Naming Rules. Add a “confidence” flag for each input. If a path is borderline feasible, log it but mark for peer review; downstream arrays in Arrays & spacing depend on the realism of this first pass.
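The row structure described above can be sketched as a small record type. This is a minimal illustration, not a mandated schema: the field names, class labels, and confidence values are assumptions to adapt to your own File Index & Naming Rules.

```python
from dataclasses import dataclass

@dataclass
class ScenarioRow:
    """One credible vehicle class / approach path (illustrative schema)."""
    path_id: str          # approach path identifier
    vehicle_class: str    # e.g. "N1", "N2" (labels are an assumption)
    unladen_mass_kg: int  # unladen mass, kilograms
    run_up_m: float       # available run-up distance, metres
    surface: str          # e.g. "asphalt, dry"
    gradient_pct: float   # positive = downhill toward the defend line
    speed_aids: str       # observed aids/limits on speed attainment
    photo_ref: str        # photo link or filename per the naming rules
    confidence: str       # e.g. "high" | "medium" | "peer-review"

# One row per credible path; borderline paths get a peer-review flag.
row = ScenarioRow("P1", "N2", 7200, 55.0, "asphalt, dry", 2.0,
                  "straight, unobstructed", "P1_wide_01.jpg", "high")
```

Keeping one row per path makes the later tie-in to rating choices traceable: a reviewer can follow any rating back to a single line of inputs.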

Aspect | What matters | Where to verify
Performance | Tested system (bollard + footing) | Crash Ratings Explained
Operations | Duty cycles, fail-state, safety | Installation Guide

913.2 Angle & vector log

Record worst credible angles with photos (225). Inputs affect crash-rated bollard arrays (321–324).

For each approach, capture a wide→detail photo set and annotate the approach vector and likely defend line. Log the governing angle (worst credible) and any glancing variants; these determine array density at corners, doors, and pinch points per Corners, Islands & Pinch Points.

Include a simple vector diagram with labeled standoff and impact point. The angle log should call out any visibility constraints or geometry that would force a chicane-like path, which may reduce speed or change impact orientation; cross-reference in Frontage protection and Array patterns.
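The angle between an approach vector and the defend line can be computed directly from the annotated diagram. A hedged sketch, assuming 2D plan coordinates; the convention (0° = perpendicular, head-on impact) is a choice for this illustration, not a standard's definition.

```python
import math

def impact_angle_deg(approach_vec, defend_line_vec):
    """Angle between the approach vector and the defend line's normal.
    0 deg = perpendicular (head-on) impact; larger = more glancing."""
    ax, ay = approach_vec
    dx, dy = defend_line_vec
    nx, ny = -dy, dx                      # normal to the defend line
    dot = ax * nx + ay * ny
    mag = math.hypot(ax, ay) * math.hypot(nx, ny)
    return math.degrees(math.acos(abs(dot) / mag))

# East-west defend line: an approach straight down its normal is head-on,
# an approach parallel to the line is fully glancing.
assert round(impact_angle_deg((0, -1), (1, 0)), 1) == 0.0
assert round(impact_angle_deg((1, 0), (1, 0)), 1) == 90.0
```

Logging the computed angle alongside the photo set keeps the "worst credible" choice auditable when two plausible vectors disagree.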

913.3 Sensitivity bands

± ranges for speed/mass/angle (228). Transparency helps acceptance (444).

Turn point estimates into bands. For speed, apply a conservative ±Δ informed by Sensitivity & Safety Factors and site friction cues; do likewise for mass and angle. State the acceptance band for each output (e.g., “tier holds for +10% speed, −5° angle”). This makes the recommendation auditable and aligns with evidence & documentation expectations.

Use the sensitivity band to flag thresholds where selection flips between Low-Speed vs HVM. If a small input change forces a higher tier, note it explicitly so reviewers understand the margin of safety.
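The flip-point check above can be expressed as a few lines of code. The 32 km/h tier threshold here is an assumed illustrative figure, not taken from any standard; substitute the actual threshold that governs your selection.

```python
def tier_for_speed(speed_kmh, hvm_threshold_kmh=32.0):
    """Illustrative two-tier split; the 32 km/h threshold is an assumption."""
    return "HVM" if speed_kmh >= hvm_threshold_kmh else "Low-Speed"

def band_flips(nominal_kmh, delta_kmh):
    """True if the tier changes anywhere inside the +/- sensitivity band."""
    lo = tier_for_speed(nominal_kmh - delta_kmh)
    hi = tier_for_speed(nominal_kmh + delta_kmh)
    return lo != hi

assert band_flips(30.0, 3.0) is True    # 27 -> Low-Speed, 33 -> HVM: flag it
assert band_flips(40.0, 3.0) is False   # stays HVM across the whole band
```

When `band_flips` returns `True`, record the flip point and the margin explicitly, as the section advises, so reviewers see how close the selection sits to the threshold.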

913.4 Terrain & calming

Catalogue features reducing speed (227). Evidence may lower rating demand (432).

List terrain breaks (tight bends, adverse camber, steep ramps), friction changes, and traffic calming that are already present or can be introduced. Where credible, these can justify a lower approach speed band. Back claims with photos and measurements; align with design selection guidance.
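A common way to argue a lower speed band is a constant-acceleration estimate of attainable speed over the available run-up: v = sqrt(2·a·d). This is a rough sketch under stated assumptions: the 3 m/s² flat-ground acceleration figure is illustrative, and the gradient term simply adds the downhill component of gravity.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def attainable_speed_kmh(run_up_m, accel_ms2=3.0, gradient_pct=0.0):
    """Constant-acceleration estimate v = sqrt(2 * a_eff * d), in km/h.
    accel_ms2 is an assumed flat-ground figure; a downhill gradient
    (positive %) adds g*sin(theta) to the effective acceleration."""
    theta = math.atan(gradient_pct / 100.0)
    a_eff = max(accel_ms2 + G * math.sin(theta), 0.0)
    return math.sqrt(2.0 * a_eff * run_up_m) * 3.6

full = attainable_speed_kmh(50.0)    # uncalmed 50 m run-up
calmed = attainable_speed_kmh(20.0)  # run-up shortened by a calming feature
assert calmed < full
```

If a calming feature credibly shortens the straight run-up, the reduced estimate supports a lower approach speed band, provided the measurement and photos back it up.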

If your project is in the UAE and requires authority sign-off, summarize the calming rationale in the submittal pack and reference SIRA Bollards (UAE) to streamline approvals.

913.5 Multi-hit assumptions

Time gaps, sequence notes (226). Impacts array spacing (232).

Define whether scenarios include multi-vehicle or sequenced impacts and the assumed time gap. This directly affects spacing rules and may require denser patterns or mixed-type arrays per Mixed-Type Arrays. Note any “degraded state” allowances (e.g., residual set after first impact) and how you maintain the egress width.

Record whether the assumption is design-basis or sensitivity-only. If multi-hit governs, highlight implications for maintenance and lifecycle planning.
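The degraded-state check described above can be sketched as a simple gap test. The 0.9 m egress minimum and 1.2 m maximum clear gap used here are illustrative placeholders, not figures quoted from a standard; take the real limits from your spacing rules.

```python
def gap_ok_after_first_hit(clear_gap_m, residual_set_m,
                           egress_min_m=0.9, hvm_max_m=1.2):
    """Check a bollard clear gap still works in the degraded state.
    A residual set (permanent deflection after the first impact) can
    widen one gap and narrow its neighbour; test both worst cases.
    The min/max limits are illustrative assumptions, not standard values."""
    widened = clear_gap_m + residual_set_m   # gap the set vehicle opened up
    narrowed = clear_gap_m - residual_set_m  # neighbouring gap it closed down
    return widened <= hvm_max_m and narrowed >= egress_min_m

assert gap_ok_after_first_hit(1.0, 0.05) is True
assert gap_ok_after_first_hit(1.2, 0.1) is False  # widened gap exceeds max
```

If the degraded-state check fails, that is exactly the case where denser patterns or mixed-type arrays come into play, and where the maintenance implications noted below start to govern.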

913.6 Standoff zones

Minimums around assets (213). Confirms frontage layout (234).

Mark measured standoff distance requirements and any protection zones around doors, glazing, queues, or critical assets. This validates that the proposed defend line and frontage layout leave adequate standoff after dynamic movement.

Include any authority-mandated offsets and operational constraints (e.g., blue-light access or turning clearances). Where standoff is tight, link to Door protection arrays and Clear-gap calculations.
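The "adequate standoff after dynamic movement" test is a subtraction and a comparison, sketched below. The treatment of the authority-mandated offset as additive to the required clearance is an assumption for this illustration; confirm how your authority defines it.

```python
def standoff_ok(measured_standoff_m, rated_penetration_m,
                required_clear_m, mandated_offset_m=0.0):
    """True if standoff survives dynamic movement.
    measured_standoff_m: defend line to protected asset/facade
    rated_penetration_m: tested dynamic penetration of the chosen system
    required_clear_m: clearance that must remain (queue, door swing, etc.)
    mandated_offset_m: authority-required offset (assumed additive here)"""
    remaining = measured_standoff_m - rated_penetration_m
    return remaining >= required_clear_m + mandated_offset_m

assert standoff_ok(6.0, 1.5, 3.0) is True   # 4.5 m remains, 3.0 m needed
assert standoff_ok(4.0, 1.5, 3.0) is False  # only 2.5 m remains
```

Where this check only just passes, that is the signal to cross-reference Door protection arrays and Clear-gap calculations, as the section notes.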

913.7 Notes & caveats

Assumptions, missing data, risks. Keeps reviewers aligned (717).

Maintain an assumptions register and capture data gaps, measurement limitations, and any conflicts between sources. Add concise risk notes where inputs are provisional, with a plan to resolve prior to design freeze. This section becomes part of the submission narrative and helps prevent misinterpretation later.

Close with an explicit statement of comparability: list which paths are representative and which are outliers requiring separate treatment or additional controls (e.g., event chicanes or stewarded gates).

913.8 Result summary

Recommended tier/rating and rationale (123, 414). Bridges to selection (432).

Present the recommended rating tier (e.g., IWA 14-1 class & speed) and the rationale: governing run-up, angle, and sensitivity outcome. Flag any array-level rules that flow from it (e.g., max clear-gap, corner treatments). Link forward to Design selection guide: HVM vs Low-Speed and, if selection favors low-speed, to Selecting Low-Speed vs HVM.

Keep language consistent with standards equivalency so reviewers can map between frameworks cleanly.
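Mapping an assessed approach speed to a nominal test speed class can be sketched as a lookup. The speed list below reflects commonly cited IWA 14-1 nominal test speeds, but verify it against the standard text before using it in a submission; the selection rule (smallest class at or above the assessed speed) is this sketch's assumption.

```python
import bisect

# Commonly cited IWA 14-1 nominal test speeds, km/h.
# Confirm against the standard before relying on these for a submission.
IWA_SPEEDS_KMH = [16, 32, 48, 64, 80]

def governing_test_speed(assessed_speed_kmh):
    """Smallest nominal test speed >= the assessed approach speed."""
    i = bisect.bisect_left(IWA_SPEEDS_KMH, assessed_speed_kmh)
    if i == len(IWA_SPEEDS_KMH):
        raise ValueError("assessed speed exceeds listed test speeds")
    return IWA_SPEEDS_KMH[i]

assert governing_test_speed(30.0) == 32
assert governing_test_speed(48.0) == 48
```

Stating the lookup rule explicitly in the summary keeps the tier recommendation reproducible from the governing run-up, angle, and sensitivity outcome.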

913.9 Export to 229

Generate tables/figures for report assembly. Traceable to 911 naming.

Use the worksheet’s export to pre-format tables and vector figures for the VDA Report Template. Filenames must follow File Index & Naming Rules so reviewers can cross-check numbers later. Where applicable, embed thumbnails of the angle log and standoff diagrams for quick scanning.
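Export filenames can be generated rather than typed, which removes one source of cross-check failures. The `<project>_<section>_<item>` pattern below is hypothetical; substitute the actual convention from File Index & Naming Rules.

```python
import re

def export_filename(project, section, item, ext="csv"):
    """Build a traceable export filename.
    The <project>_<section>_<slug>.<ext> pattern is a hypothetical
    placeholder for the real File Index & Naming Rules convention."""
    slug = re.sub(r"[^a-z0-9]+", "-", item.lower()).strip("-")
    return f"{project}_{section}_{slug}.{ext}"

assert export_filename("PRJ01", "913", "Scenario Table") == \
    "PRJ01_913_scenario-table.csv"
```

Generating names this way guarantees every exported table and figure is traceable back to its worksheet section during review.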

Before issue, run a light peer review against the method overview in VDA method Overview and ensure downstream pages (e.g., Array Patterns, Spacing Rules) are reflected in the narrative where they influence outcomes.


913 VDA Worksheet — FAQ

What goes in the scenario table?
List each credible approach: vehicle class, unladen mass, run-up distance, surface and gradient, any speed-limiting features, and a photo link. Keep one row per path so reviewers can trace how each input affects rating choice.
How do I pick the governing impact angle?
Use the worst credible angle informed by site geometry and driver desire lines. Support it with annotated photos and a simple vector diagram; if two angles are plausible, carry both into the sensitivity band and state which governs.
When does sensitivity analysis change the selected tier?
When a small increase in approach speed (or more direct angle) crosses a standard’s threshold. In that case, record the flip point, explain the margin of safety, and note any calming features that keep you on the lower tier.
Do I need to include multi-hit scenarios?
Include them if they are credible for the site or required by project scope. Note the time gap and sequence; multi-hit can drive denser arrays or mixed types and has implications for maintenance and availability.