Ingesting ONS BICS Weighted Scotland Data into Your Analytics Pipeline
A technical guide to ingesting Scotland-weighted BICS microdata, applying weights, excluding microbusinesses, and automating refreshes.
Why Scotland-weighted BICS belongs in your analytics stack
If you are building a reliable data pipeline for finance-style reporting, a BICS feed can look deceptively simple: download microdata, transform it, publish charts. In practice, the Scotland-weighted version of the Business Insights and Conditions Survey introduces exactly the kinds of issues engineering teams need to design around: modular questionnaires, wave-to-wave schema drift, exclusion rules, and weighting that must be preserved if the output is meant to support forecasting or executive dashboards. The upside is substantial. When handled correctly, BICS gives product, strategy, and operations teams a timely read on turnover, staffing, resilience, prices, and trade conditions in Scotland before slower official datasets catch up.
The key strategic point is that Scotland-weighted estimates are not just a cosmetic re-labeling of the ONS survey. They are a distinct analytical product built from ONS microdata with Scotland-specific weighting and a narrower population scope, so your pipeline must treat them as governed outputs rather than raw survey counts. That distinction matters if you are trying to combine BICS with other public sources in a forecasting model, compare them with regional macro indicators, or automate a time series dashboard that stakeholders will trust over time. In other words, the technical design is part data engineering, part statistical stewardship, and part change management.
For teams already familiar with hybrid analytics workflows and automated model refreshes, BICS is a good test case for disciplined data governance. It is also a good example of why clear provenance beats ad hoc CSV wrangling. You want a pipeline that records the wave, publication version, weighting rule, exclusion logic, and any methodological note that affects comparability. If you do that well, the same dataset can power BI dashboards, internal briefing packs, and statistical features in predictive models without becoming a source of silent bias.
What BICS is, and what the Scotland-weighted release actually means
Survey structure, waves, and modular design
BICS is a voluntary fortnightly survey of UK businesses that asks about turnover, workforce, prices, trade, business resilience, and additional topical modules such as climate adaptation or AI use. The survey was renamed from the Business Impact of Coronavirus survey as the question set expanded beyond the pandemic, and the survey remains modular: not every question appears in every wave. Even-numbered waves tend to contain a core set of questions that support monthly time-series analysis, while odd-numbered waves focus more on rotating topics like trade, workforce, and investment. For engineers, this means your schema cannot assume every column exists in every extract, and your downstream mart must tolerate sparse or missing variables wave by wave.
This modularity creates a classic analytics challenge: how do you keep a stable semantic layer while the source evolves? The answer is to map each wave into a canonical metrics model with versioned field definitions. If you need a broader design reference, the discipline described in high-volatility newsroom workflows is surprisingly relevant: verify fast, label carefully, and avoid overpromising comparability when the underlying questions have shifted. A good BICS pipeline should surface question wording, wave date, and topic type alongside the metric so that analysts can understand whether they are comparing like with like.
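As a concrete illustration, a canonical field map might look like the minimal Python sketch below. The wave numbers, column names, and metric names are hypothetical, not actual BICS variables; the point is that every metric carries a versioned definition and an explicit per-wave column mapping.

```python
# Hypothetical canonical field map: wave numbers, column names, and
# metric names below are illustrations, not real BICS variables.
CANONICAL_FIELDS = {
    "turnover_change": {
        "definition_version": 3,
        "topic": "core",
        "source_columns": {
            # wave number -> column name as it appears in that wave's extract
            100: "turnover_vs_normal",
            102: "turnover_vs_normal",
            104: "turnover_change_band",  # reworded/recoded in this wave
        },
    },
}

def resolve_column(metric: str, wave: int) -> str | None:
    """Return the wave-specific source column for a canonical metric,
    or None when the question was not asked in that wave."""
    return CANONICAL_FIELDS[metric]["source_columns"].get(wave)
```

A `None` return is itself useful information: it tells the downstream mart that the question was not in field, which is different from a non-response.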
Why Scotland-weighted estimates differ from raw ONS microdata
The Scotland-weighted publication is built from ONS microdata but reweighted to produce estimates for Scottish businesses more broadly. That means your processing layer needs to distinguish between raw respondent-level records and the analytic population represented by the weighted release. The Scottish Government publication explicitly notes that the main Scottish BICS results published by ONS are unweighted, which limits them to the businesses that responded. The weighted Scotland estimates are intended to correct that by using the microdata to infer the wider population.
Here is the important operational implication: if you ignore weights, your dashboard may still look numerically plausible, but it will be wrong in ways that are difficult to detect. The risk is especially high for subgroup slices by sector, size band, or region, because response patterns are rarely uniform. If your team already handles governed external sources such as security-aware cloud controls, apply the same mindset here: every derived field should have an explicit lineage note and a documented transformation rule. In analytics terms, weights are not an optional adornment; they are part of the measurement model.
Population scope and the 10+ employee rule
The Scotland-weighted estimates in the Scottish Government publication cover businesses with 10 or more employees. That is a major difference from the UK-wide weighted ONS BICS estimates, which include all business sizes. The reason is practical rather than ideological: the number of survey responses from very small businesses in Scotland is too limited to provide a suitable base for weighting. As a result, your pipeline should always expose the coverage rule in metadata and should never let users assume the Scotland dashboard represents the full business population.
This exclusion also changes the interpretation of trend lines. If a stakeholder asks why the Scotland series does not match a UK series, the answer is not just geography, it is also sample frame and weighting scope. This is where thoughtful documentation pays off. Teams that build robust observability contracts know that outputs are only trustworthy when the contract includes region, retention, freshness, and allowed variance. Apply the same principle to survey analytics: define the contract for what your Scotland metric means before you publish it internally.
Building the ingestion layer: from source acquisition to landing zone
Acquiring wave data and versioning it safely
Your ingestion pattern should start with a source registry, not a download script. Each wave needs a record containing the publication date, wave number, source URL, file hash, retrieval timestamp, and any visible methodology notes. Store the raw file unchanged in a landing zone so that you can reproduce historical outputs if the methodology changes or if your transforms need to be audited later. For regulated or sensitive pipelines, this is similar in spirit to the approach used in a BAA-ready document workflow: preserve source fidelity, minimize manual handling, and log every transformation.
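A minimal sketch of such a registry entry, assuming a local landing zone; the field names are a suggested shape rather than an official schema.

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timezone
from pathlib import Path

@dataclass(frozen=True)
class WaveSource:
    """One registry entry per acquired wave file."""
    wave_number: int
    publication_date: str      # as published, e.g. "2024-05-02"
    source_url: str
    file_sha256: str
    retrieved_at: str          # ISO-8601 UTC timestamp
    methodology_notes: str     # any visible caveats, recorded verbatim

def register_raw_file(wave_number: int, publication_date: str,
                      source_url: str, raw_path: Path,
                      methodology_notes: str = "") -> WaveSource:
    """Hash the raw file as received, before any transformation."""
    digest = hashlib.sha256(raw_path.read_bytes()).hexdigest()
    return WaveSource(
        wave_number=wave_number,
        publication_date=publication_date,
        source_url=source_url,
        file_sha256=digest,
        retrieved_at=datetime.now(timezone.utc).isoformat(),
        methodology_notes=methodology_notes,
    )
```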
Wave versioning matters because the survey itself changes. Questions can be added, removed, or reworded; response options may shift; and not all waves are analytically equivalent. A robust pipeline should therefore treat every wave as a versioned asset, not as an interchangeable row batch. If you later create a dashboard for executives, users should be able to click a point in time and see exactly which wave it came from and what methodological note applied. That level of traceability is what turns a raw feed into a credible analytics product.
Suggested ETL architecture for engineering teams
A practical architecture for BICS usually has four layers: raw, standardized, weighted, and serving. The raw layer stores source files exactly as received. The standardized layer cleans headers, normalizes field names, parses date columns, and maps categorical labels into a consistent vocabulary. The weighted layer applies Scotland-specific weighting logic and exclusion rules. The serving layer publishes curated tables for BI, notebooks, forecasting jobs, and APIs. This is a familiar pattern if you have already implemented a modern ETL pipeline for finance reporting or built a resilient feed for real-time visibility tooling.
In practice, the biggest design decision is where to implement weighting. If weights are supplied in the microdata, apply them in a reproducible transformation job, not in an analyst notebook. That gives you testability and allows you to store intermediate aggregates for QA. A second decision is whether to materialize wave-level fact tables or directly roll up to weekly or monthly series. For most teams, wave-level materialization is the safer default because it preserves lineage and supports reprocessing if source logic changes. Only after that should you compute dashboard-ready series and model features.
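For illustration, the weighted-layer transform might look like the sketch below, assuming pandas and hypothetical column names such as `employee_count` and `sic_section`. The point is that scope rules live in tested, reproducible code rather than in analyst notebooks.

```python
import pandas as pd

def build_weighted_layer(standardized: pd.DataFrame) -> pd.DataFrame:
    """Apply Scotland scope rules in one reproducible, testable step.
    Column names (employee_count, sic_section, weight) are assumptions."""
    in_scope = standardized[standardized["employee_count"] >= 10]
    in_scope = in_scope[~in_scope["sic_section"].isin(["A", "D", "K"])]
    # Keep respondent-level rows with their weights so wave-level facts
    # can be materialized and reprocessed if source logic changes later.
    return in_scope[["wave", "respondent_id", "question_id",
                     "response", "weight"]].copy()
```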
Example ingestion pseudo-flow
A lightweight orchestration sequence might look like this: fetch metadata, download the latest wave files, checksum and archive the raw input, normalize columns, apply inclusion filters, calculate weighted estimates, run validation checks, publish to a curated warehouse schema, and trigger downstream refresh jobs. If you are already using workflow orchestration for event-driven analytics, the mechanics will feel similar to automated market signal pipelines, except here the priority is statistical traceability rather than low-latency trading. The goal is a dependable refresh, not a fast one at all costs.
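A self-contained skeleton of that sequence might look like this. Every helper below is a stub standing in for your real implementation, and none of the names reflect an actual BICS API; only the control flow is the point.

```python
import hashlib
from pathlib import Path

# Stub steps so the skeleton runs end to end; replace each with your
# real implementation. All names are illustrative, not an API.
def download(url): return b"raw bytes from " + url.encode()
def standardize(raw): return {"rows": raw}
def apply_inclusion_filters(std): return std       # 10+ employees, SIC scope
def compute_weighted_estimates(scoped): return scoped
def run_validation(weighted, std): return []       # list of failure strings
def publish(weighted): pass

def refresh_wave(wave_url: str, landing_dir: Path, last_hash: str | None) -> bool:
    raw = download(wave_url)
    digest = hashlib.sha256(raw).hexdigest()
    if digest == last_hash:
        return False                                # unchanged source: no-op
    (landing_dir / f"raw_{digest[:12]}.dat").write_bytes(raw)  # archive first
    std = standardize(raw)
    weighted = compute_weighted_estimates(apply_inclusion_filters(std))
    failures = run_validation(weighted, std)
    if failures:
        raise RuntimeError(f"QA failed, halting publish: {failures}")
    publish(weighted)
    return True
```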
Pro tip: Keep raw, standardized, and weighted outputs in separate tables. When stakeholders question a chart, the fastest way to resolve it is to compare the exact row counts and weighted totals at each stage of the pipeline.
Applying Scotland-specific weights correctly
Weighted vs unweighted outputs
The distinction between weighted and unweighted outputs is not just technical; it changes the story. Unweighted counts tell you how many responding businesses said something. Weighted estimates tell you what the broader population is estimated to be saying, within the defined scope. For Scotland BICS, that scope is businesses with 10 or more employees, which means your release notes and dashboard tooltips should explicitly label the series as weighted Scotland estimates. If you present the figures as “survey responses,” users will infer the wrong level of generality and may make bad decisions.
One effective quality-control tactic is to store both versions side by side. Keep unweighted counts for QA, anomaly detection, and response-rate monitoring; use weighted estimates for publication. That way analysts can see whether a spike is driven by a small number of respondents or by a broader weighted shift. Teams that manage high-stakes reporting under attention pressure already understand the value of separating signal from presentation. The same idea applies here: operational truth should not be overwritten by polished metrics.
Weight application patterns and common errors
In most survey analytics, the weighted estimate for a proportion is the weighted sum of positive responses divided by the weighted sum of valid responses. For example, if you are estimating the percentage of firms reporting reduced turnover, you should only include respondents with valid answers in the denominator, and you should use the supplied weights at the respondent level. Common mistakes include averaging percentages across waves, applying weights after aggregation, or reweighting already weighted estimates. Any of those can produce outputs that look stable while drifting away from the intended methodology.
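In code, the correct pattern is respondent-level weighting before any aggregation. The sketch below assumes pandas and hypothetical `response` and `weight` columns; the category label is illustrative.

```python
import pandas as pd

def weighted_proportion(df: pd.DataFrame, answer: str = "Decreased") -> float:
    """Weighted share of firms giving a particular answer. Weights are
    applied at the respondent level, and only valid responses enter the
    denominator. Column and category names are illustrative."""
    valid = df[df["response"].notna()]
    numerator = valid.loc[valid["response"] == answer, "weight"].sum()
    denominator = valid["weight"].sum()
    return numerator / denominator if denominator else float("nan")
```

Note that percentages are recomputed from respondent-level rows every time; they are never averaged across waves or reweighted after aggregation.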
Another common error is to ignore the sample design when slicing by subgroup. If a subgroup has very few respondents, weights can amplify noise rather than reduce it. That is why QA should include minimum base thresholds and suppression rules. You can model this like a risk policy used in high-risk access governance: certain actions are allowed only when the evidence is strong enough. In survey terms, do not publish a subgroup estimate if the base is too thin to support reliable inference.
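Building on the previous sketch, a minimal suppression gate might look like this. The threshold of 30 is an illustrative assumption, not an ONS rule; set it from your own disclosure policy.

```python
MIN_BASE = 30  # illustrative threshold; set from your disclosure policy

def estimate_or_suppress(subgroup: "pd.DataFrame") -> dict:
    """Publish a weighted estimate only when the unweighted valid base is
    large enough; thin cells are suppressed before the serving layer.
    Reuses weighted_proportion from the previous sketch."""
    base = int(subgroup["response"].notna().sum())
    if base < MIN_BASE:
        return {"value": None, "suppressed": True, "base": base}
    return {"value": weighted_proportion(subgroup),
            "suppressed": False, "base": base}
```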
Handling rounding, suppression, and uncertainty
Published survey series often round percentages, which makes QA trickier because small differences may be artifacts of rounding rather than real changes. In your warehouse, store both the exact computed value and the rounded display value. If the source includes suppression or disclosure rules, preserve them as flags and propagate them through your serving layer. This is especially important when the chart feeds a leadership dashboard, because suppressed values should not reappear via derived calculations or drilldowns.
For forecasting models, the best practice is to ingest the exact weighted value, not the rounded display value, while still carrying a separate rounded display field for dashboard rendering. If you are building decision support around macro indicators, this mirrors the logic behind data center investment KPI discipline: one number may be for finance, another for operators, but the provenance must remain clear. Treat rounding as a presentation concern, not a source of truth.
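One lightweight way to encode that split, assuming the serving-layer shape sketched earlier:

```python
def to_serving_row(metric: str, exact_value: float | None,
                   suppressed: bool) -> dict:
    """Carry the exact value for models and a separate display value for
    dashboards; rounding is presentation, never the stored truth."""
    hidden = suppressed or exact_value is None
    return {
        "metric": metric,
        "exact_value": None if hidden else exact_value,
        "display_value": None if hidden else round(exact_value, 1),
        "suppressed": suppressed,
    }
```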
Exclusions, scope rules, and how to encode them in your pipeline
Microbusiness exclusion and why it matters
The Scotland-weighted estimates exclude businesses with fewer than 10 employees because the sample is too small to support a suitable weighting base. That exclusion is not a footnote; it changes the interpretation of trends, especially in sectors where microbusinesses dominate. If your stakeholders operate in services, hospitality, or local retail, they may assume the figures represent the whole market, when in reality the series is closer to a larger-employer lens. Your metadata should make that unavoidable.
One useful pattern is to create a data contract table that records each rule alongside the wave or metric it affects. Example fields include population_scope, minimum_employee_threshold, excluded_sic_sections, survey_modality, and comparability_notes. This is the same discipline that makes glass-box systems trustworthy: decisions should be explainable after the fact. In a BICS pipeline, the explanation is your audit trail.
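A hypothetical contract record for the Scotland series might look like this; the values summarize the rules described in this guide, and the field names are suggestions rather than a formal standard.

```python
# A hypothetical data-contract record; field names follow the ones
# suggested above and are not an official schema.
SCOTLAND_BICS_CONTRACT = {
    "dataset": "bics_scotland_weighted",
    "population_scope": "Scottish businesses with 10+ employees",
    "minimum_employee_threshold": 10,
    "excluded_sic_sections": ["A", "D", "K"],
    "survey_modality": "voluntary fortnightly survey",
    "comparability_notes": "Modular questionnaire; not all questions "
                           "appear in every wave.",
}
```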
Excluded sectors and coverage gaps
The survey excludes the public sector and SIC 2007 sections A, D, and K: agriculture, forestry and fishing; electricity, gas, steam and air conditioning supply; and financial and insurance activities. If your business intelligence platform spans multiple industries, make sure users know that a Scotland BICS trend is not universal across the economy. A dashboard that blends covered and excluded sectors without annotation will invite misinterpretation. That risk grows when teams use the series as features in machine learning, because models can silently overlearn the coverage bias.
This is another place where source-based documentation matters. Link your internal dictionary back to the source publication, and if possible render exclusions directly in tooltips or dataset descriptions. Teams who build procurement or compliance systems know that the fastest way to reduce incident rates is to make the constraints visible at the point of use. The same principle applies here: annotate exclusions where analysts will actually see them.
Time-period comparability and modular questions
Because BICS is modular, some questions are asked for the live survey period while others reference the most recent calendar month or a different recall window. That means two visually similar metrics may not be comparing the same time horizon. Your transformation layer should normalize these into explicit period_type fields, and your semantic layer should prevent accidental joins across incompatible windows. If you skip this, you may create “trend lines” that are really a mixture of weekly, monthly, and survey-period responses.
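One way to enforce that in code is to make the window an explicit type and guard every join, as in this sketch; the period categories shown are illustrative.

```python
from enum import Enum

class PeriodType(str, Enum):
    SURVEY_PERIOD = "survey_period"    # the wave's live reference period
    CALENDAR_MONTH = "calendar_month"  # most recent full calendar month
    OTHER_RECALL = "other_recall"      # any other stated recall window

def assert_comparable(a: dict, b: dict) -> None:
    """Guard to run before joining or charting two metrics together;
    metric records are assumed to carry a period_type field."""
    if a["period_type"] != b["period_type"]:
        raise ValueError(f"Incompatible time windows: "
                         f"{a['period_type']} vs {b['period_type']}")
```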
For teams used to fast-moving operational data, this is similar to the challenge described in last-mile testing: the system may work in theory but fail under real-world variation. Survey time windows are another kind of real-world variation. Your pipeline should assume that period semantics can vary by question and wave, and should enforce that distinction in code rather than relying on analyst memory.
Quality assurance, validation, and data governance
Build QA around consistency checks, not just row counts
Row counts alone are too weak for survey ETL. A robust QA suite for BICS should verify the presence of expected waves, check that weight totals are within reasonable bounds, compare current outputs to prior published releases, and flag unexpected category emergence or disappearance. You should also compare weighted and unweighted distributions for major segmentations to detect coding mistakes. If a wave suddenly shows impossible shifts in the share of large firms or a category’s proportion jumps because of a join error, the QA layer should stop publication rather than letting the issue reach the dashboard.
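A few of those checks, sketched in pandas with thresholds that are assumptions to tune against your own release history:

```python
import pandas as pd

def qa_checks(current: pd.DataFrame, prior: pd.DataFrame) -> list[str]:
    """Illustrative consistency checks; the 25% bound is an assumption."""
    failures = []
    # 1. Every previously published wave should still be present.
    if missing := set(prior["wave"]) - set(current["wave"]):
        failures.append(f"waves missing from current build: {missing}")
    # 2. Weight totals should not swing wildly between adjacent waves.
    weight_totals = current.groupby("wave")["weight"].sum().sort_index()
    if (weight_totals.pct_change().abs().dropna() > 0.25).any():
        failures.append("weight totals moved more than 25% between waves")
    # 3. New response categories usually mean a coding or join error.
    if new := set(current["response"].dropna()) - set(prior["response"].dropna()):
        failures.append(f"unexpected new response categories: {new}")
    return failures
```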
That approach is aligned with the principles behind trust metrics: credibility is measurable, not assumed. In data engineering terms, trust comes from reproducible checks, not from the elegance of a chart. A well-designed pipeline should emit QA artifacts with every refresh so that reviewers can see why the build passed.
Metadata, lineage, and reproducibility
Metadata is not optional when your dataset is both statistical and operational. Store the source URL, retrieval timestamp, wave number, release version, population scope, employee threshold, excluded industries, and transformation code version. Then expose that metadata in your BI catalog and notebook environment. When an analyst asks why the figure changed between two dates, you should be able to answer in minutes, not days. This becomes especially valuable if a methodological note changes or if a wave is corrected after publication.
Good lineage practices also reduce internal friction. Instead of debating whether a number is “right,” teams can inspect the pipeline and see exactly which logic produced the output. That is the same payoff enterprises get from vendor diligence workflows: evidence replaces guesswork. With BICS, the evidence chain should run from source publication to raw archive to transformed metric to dashboard tile.
Governance for sharing, retention, and access
Although BICS microdata is public-sector data, your internal processing may still involve controlled access, particularly if you enrich it with proprietary datasets or use it in commercial models. Define who can access raw microdata, who can see respondent-level QA outputs, and who can publish weighted estimates. Retention policies should keep raw and transformed artifacts long enough to support reproducibility and auditing, but not so long that old intermediates create confusion. If you operate across regions, align the data residency and observability conventions with your broader governance program.
That governance mindset is increasingly important as teams build mixed public-private analytics stacks. The lessons from sovereign observability apply well: define what can be moved, where it can live, and how freshness is verified. Your BICS pipeline should be just as explicit about what data is stored, what is derived, and what is safe to export into downstream reporting tools.
How to automate refreshes for dashboards and forecasting models
Scheduling, triggers, and idempotency
Because BICS is released on a regular wave cycle, the refresh process should be automated but not blindly so. Use a scheduled job to poll for new waves or updated files, then trigger downstream transformations only when the source checksum changes. Make the pipeline idempotent so reruns do not duplicate records or produce inconsistent outputs. If a wave arrives late or is republished, your job should replace the affected version cleanly while preserving the prior artifact for audit purposes.
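A delete-then-insert upsert inside a single transaction is one simple way to get that behavior; the sketch below uses sqlite3 syntax, and the table and column names are illustrative.

```python
import sqlite3

def upsert_wave(conn: sqlite3.Connection, wave: int,
                rows: list[tuple], release_version: str) -> None:
    """Idempotent publish: replacing the affected wave atomically means a
    rerun or a republished wave lands cleanly, while the raw artifact in
    the landing zone preserves the prior version for audit."""
    with conn:  # commits on success, rolls back on error
        conn.execute("DELETE FROM bics_facts WHERE wave = ?", (wave,))
        conn.executemany(
            "INSERT INTO bics_facts (wave, metric, exact_value, release_version) "
            "VALUES (?, ?, ?, ?)",
            [(wave, metric, value, release_version) for (metric, value) in rows],
        )
```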
Many teams find it useful to separate “ingest new source” from “recompute downstream metrics.” That allows you to re-run the analytical layer without redownloading the world, and it gives data scientists a stable interface for model retraining. If you have already implemented rapid CI/CD release discipline, the same release mechanics apply here: build, test, promote, and rollback must be explicit. A BICS refresh should behave like a controlled deployment, not like a one-off script.
Serving charts, tables, and model features
For dashboard use, create a time-series fact table with fields for wave, metric, weighted_value, unweighted_base, confidence flags, and comparability notes. That table can drive a BI dashboard with filters for sector, size band, and topic area. For forecasting, create a feature table that aligns the BICS wave with your model’s temporal grain, such as weekly, monthly, or quarter-start. Keep display labels separate from model features so chart wording does not leak into the feature space. If your model uses lagged features, document exactly how each lag is calculated.
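A minimal sketch of those two shapes, with assumed column names and a single documented lag:

```python
import pandas as pd

# Suggested fact-table shape; all column names are assumptions.
FACT_COLUMNS = ["wave", "wave_end_date", "metric", "sector", "size_band",
                "weighted_value", "unweighted_base", "suppressed",
                "comparability_note"]

def make_feature_table(facts: pd.DataFrame, lag_waves: int = 1) -> pd.DataFrame:
    """Pivot wave-level facts to the model grain with one explicit,
    documented lag so retraining stays reproducible."""
    wide = (facts.pivot_table(index="wave_end_date", columns="metric",
                              values="weighted_value")
                 .sort_index())
    return wide.shift(lag_waves).add_suffix(f"_lag{lag_waves}")
```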
Strong teams also publish a data quality dashboard for the BICS pipeline itself. That internal dashboard should show freshness, wave count, row-level exceptions, and any suppressions applied to current outputs. This is the same operational discipline used in regulated device DevOps: if a build changes, the validation evidence must change with it. Your analytics stack should be no less rigorous.
Example refresh logic in pseudocode
At a high level, a job might execute the following logic: check the registry for a new wave, download the source package, verify hash, store raw artifact, standardize columns, apply Scotland inclusion rules, calculate weighted estimates, write curated tables, run QA checks, and then refresh dashboard extracts or model features. If QA fails, notify Slack or email and halt promotion. If QA passes, store the release version and emit a changelog entry. This produces a stable, inspectable lifecycle for each wave.
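In runnable form, the promotion gate might be as small as the sketch below; `notify` defaults to `print` here, and a real deployment would swap in a Slack or email hook.

```python
from dataclasses import dataclass, field

@dataclass
class Release:
    wave: int
    version: str
    changelog: list = field(default_factory=list)

def promote(release: Release, qa_failures: list[str], notify=print) -> bool:
    """Promotion gate: QA failures halt the release and alert a channel;
    success records the release version and a changelog entry."""
    if qa_failures:
        notify(f"BICS wave {release.wave} halted: {qa_failures}")
        return False
    release.changelog.append(f"{release.version}: QA passed, promoted")
    notify(f"BICS wave {release.wave} promoted as {release.version}")
    return True
```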
For teams that are trying to reduce manual spreadsheet work, this is the point where automation delivers real leverage. You avoid copy-paste errors, maintain a full revision history, and make it possible to refresh multiple downstream uses from one source of truth. The business value is similar to what teams see when they replace manual reporting with automated signal generation: faster decisions, fewer handoffs, and much better reproducibility.
Comparison: raw survey counts, ONS UK weighted results, and Scotland-weighted estimates
When stakeholders ask which BICS series they should use, the answer depends on the decision they are making. The table below helps distinguish the three common layers you may expose in your warehouse or dashboard. The critical point is that all three can coexist, but they should not be used interchangeably. Each serves a different analytical purpose, from QA to publication to regional decision support.
| Dataset type | Population scope | Weighting | Best use | Main caveat |
|---|---|---|---|---|
| Raw respondent counts | Businesses that answered the wave | No | QA, response monitoring, audit trails | Not representative of the wider population |
| ONS UK weighted BICS | UK businesses across sizes | Yes | National trend analysis | Not Scotland-specific and may not match local coverage |
| Scotland-weighted BICS | Scottish businesses with 10+ employees | Yes | Regional dashboards and policy briefings | Excludes microbusinesses and some SIC sections |
| Unweighted Scotland results | Scottish respondents only | No | Exploratory review of respondent behavior | Cannot infer the broader Scotland population |
| Derived forecast features | Depends on modeling window | Usually yes, after transformation | Forecasting, nowcasting, scenario analysis | Must preserve time alignment and comparability rules |
If you also track external indicators, you can enrich BICS with other public series, but you should do so carefully and with explicit lag logic. The same engineering discipline used in public-data location analytics helps here: useful outputs depend on matching the right population, the right time window, and the right measurement frequency.
Implementation checklist for engineering teams
Minimum viable production checklist
Before you expose any BICS metric to end users, confirm that your pipeline has a raw archive, a metadata registry, a weighting transform, a validation suite, and a documented refresh schedule. Make sure the Scotland scope and 10+ employee threshold are visible in the semantic model. Store both weighted and unweighted values so QA can compare them. Add a release note field for methodological changes and a link back to the source publication.
Next, define ownership. Someone should own source monitoring, someone should own transformation logic, and someone should own publishing and rollback. If that sounds like overkill, remember that survey data often becomes a reference point in executive decisions long after the original analyst has moved on. The process discipline you might use for high-visibility reporting is the right standard here too, because BICS can influence staffing, procurement, and investment discussions.
What to monitor after launch
Once the pipeline is live, monitor freshness, source changes, weight distribution shifts, null spikes, suppression counts, and comparison deltas against the prior wave. Add alerts for missing waves or file format changes. If your dashboard consumers rely on weekly or monthly refresh expectations, publish those SLAs explicitly. A transparent refresh policy will do more to build adoption than an extra chart ever will.
Also watch for analytical drift. If a segment suddenly becomes more volatile, that may reflect the source survey rather than the business reality. In those cases, pair the chart with a note explaining whether the current wave included a new module, a changed question, or a narrower valid-base population. That level of nuance is what distinguishes a reliable analytics platform with strong UX from a brittle spreadsheet dump.
Practical use cases: dashboards, forecasting, and leadership reporting
Time-series dashboards for operators
A well-built Scotland BICS dashboard should answer practical questions quickly: Are more firms reporting turnover pressure? Are staffing shortages easing? Are price expectations cooling? Use a small set of KPI tiles, a wave-aligned trend chart, and filters for sector and employee band. Include a methodology panel that explains weighting, exclusions, and the 10+ employee coverage rule. If users have to search for the definition, the dashboard is not doing its job.
This is also where internal links to governance or KPI content can support your internal knowledge base. For example, teams that already care about investment KPIs or reporting bottlenecks will appreciate a dashboard that is both fast and interpretable. The best dashboards do not just show movement; they explain why the movement can be trusted.
Forecasting models and scenario analysis
BICS can be valuable as a leading indicator in models for demand, labor pressure, or business confidence. The key is to align the wave date to the model’s feature timestamp carefully. Use lagged values, rolling averages, or change-from-baseline features rather than raw wave numbers alone. Because the survey is modular, only include features from question families that are sufficiently comparable across time. Store feature provenance so model retraining remains reproducible.
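For example, a small feature builder along these lines keeps the lag and smoothing choices explicit; the three-wave window is an assumption to tune.

```python
import pandas as pd

def change_features(series: pd.Series, window: int = 3) -> pd.DataFrame:
    """Lagged, smoothed, and change-from-baseline features from one
    wave-aligned series; the window parameter is in waves."""
    baseline = series.iloc[:window].mean()  # early waves as the baseline
    return pd.DataFrame({
        "lag1": series.shift(1),
        "rolling_mean": series.shift(1).rolling(window).mean(),
        "delta_vs_baseline": series - baseline,
    })
```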
For teams experimenting with nowcasting, this is similar to the mindset behind real-time forecasting for small businesses, except the challenge is less about speed than it is about comparability and survey design. A strong model will treat BICS as one signal among many and will not overfit to a single wave’s noise. You should also test model stability when a wave is republished or a question wording changes.
Executive reporting and narrative context
Leadership consumers usually want the headline and the implication. That means your outputs should combine a simple visual trend, a short interpretation, and a source note. Avoid jargon unless the audience needs it, but do not strip away the methodological caveats. A one-sentence explanation such as “weighted estimates cover Scottish businesses with 10 or more employees and exclude microbusinesses and certain SIC sections” can prevent costly misunderstanding. If you are publishing to a wider audience, this is a trust issue, not just a formatting choice.
That kind of clarity is consistent with the best practices seen in trust-focused editorial systems. People remember the charts, but they rely on the caveats. When in doubt, make the methodological context visible where the decision is made.
Frequently asked questions
What is the main difference between unweighted ONS Scotland results and Scotland-weighted estimates?
Unweighted results reflect only the businesses that responded to the survey, while Scotland-weighted estimates use ONS microdata and weighting to estimate conditions for the wider Scottish business population within the defined scope. For Scotland, that scope is businesses with 10 or more employees, so the weighted release is more representative for regional analysis and dashboards.
Why are microbusinesses excluded from the Scotland-weighted series?
They are excluded because the number of survey responses from businesses with fewer than 10 employees in Scotland is too small to provide a suitable basis for weighting. Including them would make estimates less stable and less reliable. Your pipeline should clearly encode this rule so users do not assume the series covers all Scottish businesses.
Can I use the Scotland BICS series directly in a forecast model?
Yes, but only after you align wave timing, preserve weighting logic, and account for modular questions and comparability changes. In most cases, it is better to use lagged or smoothed features than raw wave values. You should also keep the exact weighted outputs separate from rounded display values to avoid introducing unnecessary error.
How should I handle waves where a question was not asked?
Represent the missingness explicitly rather than imputing by default. Not every BICS wave contains the same modules, so absence often means the question was not in field, not that the answer is zero. Your semantic layer should distinguish between not asked, not answered, suppressed, and valid zero.
What are the biggest QA mistakes teams make with survey data?
The most common mistakes are applying weights after aggregation, mixing incompatible time windows, ignoring exclusion rules, and failing to version methodology changes. Another frequent problem is relying on row counts only. Good QA for BICS should compare weighted and unweighted distributions, inspect base sizes, and validate that each wave matches the expected release pattern.
How often should I refresh a BICS dashboard?
Refresh on the release cadence of the source data, but trigger downstream processing only when the new wave or update is actually available. For most teams, that means automated polling or scheduled ingestion with idempotent rebuilds. The right goal is dependable freshness, not unnecessary churn.
Conclusion: turn BICS into a governed, reusable analytics asset
Scotland-weighted BICS data is highly valuable precisely because it is not a simple download-and-chart exercise. It requires disciplined handling of microdata, weights, exclusions, modular questions, and publication metadata. If you approach it as a governed analytical product, it can become a dependable input for dashboards, briefing packs, and forecasting models. If you treat it like a plain CSV, it will eventually produce confusion, mistrust, or a misleading trend line.
The winning pattern is straightforward: archive raw sources, version every wave, apply Scotland-specific rules in code, validate aggressively, and publish only after QA passes. That approach mirrors the best practices you would use in any modern analytics stack that values reproducibility and trust. For deeper operational inspiration on building resilient pipelines and evidence-based reporting, it is worth revisiting secure workflow design, validated deployment patterns, and governed observability standards. Those same ideas make public survey data far more useful once it enters your warehouse.
Related Reading
- Designing Consent-Aware, PHI-Safe Data Flows Between Veeva CRM and Epic - Useful for teams that need strong governance in sensitive data pipelines.
- Glass-Box AI Meets Identity: Making Agent Actions Explainable and Traceable - A strong complement to lineage and auditability thinking.
- From Certification to Practice: Turning CCSP Concepts into Developer CI Gates - Great for applying control frameworks to analytics systems.
- Newsjacking OEM Sales Reports: A Tactical Guide for Automotive Content Teams - Helpful if you want to translate data releases into narrative reporting.
- DevOps for Regulated Devices: CI/CD, Clinical Validation, and Safe Model Updates - Relevant for release discipline, validation, and safe automation.