SSAS Data Types In Multidimensional Cubes: Best Practices



Data Types in SSAS Multidimensional cubes are not a housekeeping detail. They decide whether a measure aggregates correctly, whether a date hierarchy behaves, whether a key joins cleanly, and whether your reports tell the truth or produce expensive confusion.


If you have ever seen a sales cube return a wrong total, a fiscal calendar sort in alphabetical order, or a dimension refuse to process because a “date” arrived as text, you already know the problem. The difference between a stable cube and a fragile one often comes down to how carefully you handle Data Types, Cubes, and Data Modeling Standards from source to semantic layer.

This article explains how SSAS Multidimensional interprets types, why type choices affect processing and query behavior, what to use for measures and dimension keys, and what to avoid when building enterprise BI models. It also covers how to validate type choices before they break reporting. The practical standard here is simple: define the meaning first, choose the type second, and verify the result in the cube after every major change.

Good cube design is not just about relationships and aggregations. It starts with whether the model can trust the values it is storing, comparing, and summarizing.

Understanding Data Types In SSAS Multidimensional Cubes

SSAS Multidimensional treats data types as part of the cube’s behavior, not just metadata. A measure is expected to behave like a number, a dimension attribute might act like a label or a key, and a member property may need text, date, or numeric semantics depending on how users consume it. If those assumptions do not line up with the source data, the cube can still process, but the results may be misleading.

The critical distinction is between the relational source, the Data Source View, and the cube itself. The source system may store a value as nvarchar, the DSV may expose it as a converted date, and the cube may present it as a time intelligence dimension member. That chain matters because errors can enter at any point. A value that looks clean in SQL Server may still sort incorrectly or fail relationship resolution once SSAS applies its own semantic rules.

How SSAS interprets different value types

Numeric values drive aggregation, comparisons, and calculations. Currency values need precision and predictable rounding. Date/time values support hierarchies, time intelligence, and ordering. Text values are usually used for captions, codes, or descriptive attributes. Boolean values often appear as flags or status indicators. Values that look like identifiers, including GUID-like strings, are treated very differently from integers because they are not naturally sortable in a business-friendly way.

  • Measures: need mathematically meaningful types.
  • Keys: need stability and uniqueness more than readability.
  • Attributes: need consistent typing so hierarchy and grouping logic works.
  • Properties: need types that match how the property will be displayed or filtered.

Data type also affects sort order and comparisons. A numeric code stored as text will sort lexicographically, so 100 may appear before 20. A date stored as a string may group by the wrong culture format. A key typed inconsistently across tables can block attribute relationship resolution even when the values appear identical to a human reader.
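The lexicographic-sort pitfall is easy to demonstrate. A minimal Python sketch with hypothetical codes:

```python
# Numeric codes stored as text sort lexicographically, not numerically.
codes_as_text = ["100", "20", "3"]
print(sorted(codes_as_text))   # ['100', '20', '3'] -- "100" sorts before "20"

codes_as_int = [int(c) for c in codes_as_text]
print(sorted(codes_as_int))    # [3, 20, 100] -- the order users expect
```

The same mismatch occurs in a cube when an attribute key is typed as a string: member order follows character-by-character comparison rather than numeric value.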

When you design around Data Modeling Standards, you are defining what the business means by the value, not just what SQL can store. That is the difference between a technically valid cube and a useful one. Microsoft documents the SSAS processing and model behaviors in Microsoft Learn, which is the right place to verify how a property or attribute should be typed in the platform.

Why Data Types Matter For Cube Accuracy And Performance

Bad typing creates bad answers. If a fact column that should hold a number arrives as text, SSAS may not aggregate it correctly or may require a conversion path that breaks during processing. If a null is introduced where the cube expects a valid numeric member, totals can shift or disappear. If a date is misread as an integer surrogate without the proper dimension design, time intelligence calculations may produce nonsense.

Performance suffers too. Oversized data types increase storage footprint and can slow processing. Text-heavy keys and high-cardinality attributes reduce compression efficiency, increase memory pressure, and create more work during queries. The effect is not always dramatic on a small test cube, which is why teams sometimes miss it until production volume grows.

How typing affects storage, compression, and caching

SSAS Multidimensional benefits from compact, consistent structures. Smaller integer keys are easier to encode and compress than long strings or GUIDs. Better compression means faster processing and smaller segment storage. That also improves cache efficiency because the engine spends less time moving unnecessary bytes around. In practical terms, a well-typed cube is easier to process nightly and more responsive under concurrent reporting load.

The problem becomes obvious in real-world examples. A sales amount stored as a varchar forces conversion before aggregation. A date stored as an integer without a properly mapped calendar dimension loses the semantics needed for drilldown by year, month, and day. A product code padded with inconsistent spaces may create duplicate-looking members. These are not edge cases. They are common migration mistakes.

  • Incorrect typing can cause wrong totals.
  • Oversized types can slow processing.
  • Inconsistent keys can break relationships.
  • String-based facts can damage aggregation logic.

For broader context on the business impact of data quality failures, NIST's data management and integrity guidance is a useful reference, and the enterprise reporting tools built on SSAS reinforce the same operating reality. If your cube feeds finance or operations dashboards, type accuracy is a reliability issue, not a formatting issue.

Choosing The Right Data Types For Measures

Measure types should reflect business meaning first. A count should usually be an integer. A monetary amount should use a fixed-precision decimal or currency-based type. A ratio or percentage may need decimal precision, but you must decide whether rounding at the source, in the DSV, or in the cube best matches reporting requirements. The key question is simple: what does the business expect this number to mean?

For financial reporting, Currency is often the safest choice because it preserves precision better than floating-point types in scenarios where exact cents matter. If you are summing invoices, tracking budget variance, or comparing ledger totals, avoid approximate numerics unless the source system already treats them as approximate. A decimal type with a defined scale is usually better than a float for accounting use cases.

When integer, decimal, currency, or float makes sense

Use integers for whole-unit measures such as units sold, tickets closed, or assets counted. Use decimals for fractional quantities such as weights, conversion rates, or engineered ratios. Use currency for money when you need precise aggregation and minimal rounding drift. Use floating point only when the value is inherently approximate, such as scientific telemetry or some statistical calculations where absolute precision is not the business requirement.

Type               Best use
Integer            Whole counts and discrete quantities
Decimal/Currency   Financial values and fixed-precision metrics
Float              Approximate scientific or technical values

One common mistake is storing business counts as decimals because someone wants “flexibility.” That usually creates confusion. A count of products sold is not a ratio. Another mistake is using float for currency because it seems convenient during ETL. It is convenient until finance compares the cube to the source ledger and finds a cent-level mismatch that appears small but destroys trust. SSAS is only as accurate as the values you feed it, and Microsoft’s own SSAS guidance in Microsoft Learn makes clear that model semantics matter throughout processing and query behavior.
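The cent-level drift from using binary floating point for money can be shown in a few lines of Python using the standard `decimal` module:

```python
from decimal import Decimal

# Summing cents with binary floats accumulates rounding drift.
float_total = sum([0.10] * 3)           # IEEE 754: 0.30000000000000004
exact_total = sum([Decimal("0.10")] * 3)

print(float_total == 0.3)               # False: cent-level mismatch
print(exact_total == Decimal("0.30"))   # True: fixed-precision arithmetic
```

Three dimes failing to equal thirty cents is exactly the kind of discrepancy finance teams find when they reconcile a float-typed cube against the ledger.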

Choosing The Right Data Types For Dimension Keys And Attributes

Dimension keys should be stable, compact, and consistent across source systems and the cube. That usually points to integer surrogate keys rather than business-readable natural keys. A surrogate key gives you a controlled identifier that does not change when a business code is updated, a customer changes naming conventions, or a legacy system merges records. In SSAS, that stability is valuable because attribute relationships depend on reliable joins.

Natural keys still have a role, especially when they are guaranteed stable and truly meaningful to users. But they are often longer, more fragile, and more likely to contain formatting issues. A customer number with embedded dashes, a product code with leading zeros, or a composite business key made from multiple fields can work, but only if you standardize it carefully before it reaches the cube.

Keys, attributes, and user-facing labels

Keys and attributes do not serve the same purpose. A key exists to identify a row. A user-facing attribute exists to help someone browse, filter, or group data. That means a dimension can use a compact integer key internally while still exposing a string caption externally. This separation is one of the cleanest Data Modeling Standards you can enforce in SSAS Multidimensional.

String attributes are appropriate for names, categories, and descriptions, but they should be normalized. Trim spaces. Standardize case where needed. Remove hidden duplicates caused by leading or trailing whitespace. If a “North” member sometimes arrives as “North ”, you will eventually spend time chasing phantom duplicates in the dimension browser.
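A small sketch of the phantom-duplicate problem, with hypothetical member names:

```python
# Hidden whitespace and casing differences create phantom duplicate members.
raw_members = ["North", "North ", " north"]

# After trimming and standardizing case, only one distinct member remains.
cleaned = {m.strip().title() for m in raw_members}
print(cleaned)   # {'North'}
```

Running this normalization in ETL, before the dimension is processed, is far cheaper than deduplicating members after users have already seen them in reports.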

  • Use surrogate integer keys for stability and performance.
  • Keep natural keys if they are business-critical and truly stable.
  • Use string attributes for display and grouping, not identity.
  • Avoid GUIDs unless a system constraint forces them.

Date keys deserve special handling. Some teams use integer date keys like 20250411. That can work if the model is consistent and the date dimension is built for it. GUIDs are usually poor dimension keys because they are large, hard to read, and not useful for ordering or human debugging. For cube design guidance and official model behavior, keep the vendor reference close. Microsoft Learn is the best source for SSAS semantics, and it is much more useful than guessing how the engine will resolve attribute joins.
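If you do adopt integer date keys, the conversion back to a real date should be explicit and centralized. A minimal Python sketch, assuming the yyyymmdd convention mentioned above:

```python
from datetime import date, datetime

# Convert an integer date key like 20250411 into a real date so calendar
# logic (year, month, day) stays reliable and invalid keys fail loudly.
def date_from_key(key: int) -> date:
    return datetime.strptime(str(key), "%Y%m%d").date()

d = date_from_key(20250411)
print(d.year, d.month, d.day)   # 2025 4 11
```

An invalid key such as 20251341 raises an error immediately, which is preferable to silently creating a meaningless member.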

Handling Dates, Times, And Fiscal Calendars

Dates are one of the easiest places to break a cube. Store them as proper date/time values in the source whenever possible. If your source system only gives you strings or integer keys, convert them early in ETL so the DSV and cube can work with a clean, consistent representation. That helps SSAS recognize order, support hierarchy rollups, and calculate time intelligence correctly.

Time intelligence depends on type consistency. A calendar hierarchy needs a predictable year, quarter, month, and day structure. A fiscal calendar needs the same discipline, plus a deliberate mapping from fiscal periods to source dates. If the date key is inconsistent or the source format shifts by locale, you risk broken relationships and confusing drilldowns. The model may still process, but users will lose confidence the first time a month sorts incorrectly.

Role-playing dimensions and multi-calendar scenarios

Many cubes use role-playing dimensions such as Order Date, Ship Date, and Invoice Date. These are all references to the same underlying date dimension, but each role requires the correct relationship and business meaning. If one role is typed differently from another, queries may behave inconsistently or produce different rollups for similar facts. That is why date key design should be uniform from source to cube.

Locale problems are especially common. A date string like 04/05/2025 means one thing in U.S. formatting and another in many international formats. If you parse it too late, or let the server infer the meaning, the cube can silently create the wrong date member. That is a classic enterprise BI failure because it is subtle and not always caught during testing.
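The ambiguity is mechanical, not theoretical. The same string yields two different dates depending on which format convention the parser assumes:

```python
from datetime import datetime

# The same string parses to different dates under different locale conventions.
raw = "04/05/2025"
us_style   = datetime.strptime(raw, "%m/%d/%Y").date()  # April 5
intl_style = datetime.strptime(raw, "%d/%m/%Y").date()  # May 4

print(us_style, intl_style, us_style == intl_style)
# 2025-04-05 2025-05-04 False
```

This is why the parse should happen once, early in ETL, with an explicit format, rather than being left to server-side inference.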

Dates are not just values. In SSAS, they are the backbone of hierarchy behavior, period comparisons, and most executive dashboards.

For calendar and date-related best practices, it is worth aligning your model with standard dimensional design conventions and validating them against the relational source before processing. The same discipline applies whether you are building a calendar cube for finance or a fiscal reporting model for operations. Clear types reduce ambiguity, and that matters more than clever shortcuts.

String Data Types, Collations, And Text Attributes

Text attributes are useful, but they are easy to misuse. They work well for labels, descriptions, status names, category captions, and friendly display values. They are poor choices for facts, keys, and analytical calculations. If you store a number or date as a string just to “make it easy,” you are usually creating a hidden maintenance problem that will show up later in sorting, filtering, or aggregation.

Collation matters because SSAS and the relational source must agree on comparison behavior. Case sensitivity, accent sensitivity, and locale-specific sort rules can all change how members are grouped and displayed. A model that sorts names one way in SQL Server may sort them another way in the cube if collation settings are not aligned. That matters for user trust, especially in multilingual environments.

Standardize text before it reaches the cube

Text should usually be trimmed, standardized, and cleansed in ETL. Remove leading and trailing spaces. Normalize inconsistent casing when business rules allow it. Replace control characters and malformed Unicode. If you let dirty text into the cube, you are asking SSAS to solve a data quality problem it was never designed to own.

Long free-form text is usually not suitable for dimension attributes. It consumes more memory, slows browsing, and rarely helps analysis. Users do not want to group by notes, comments, or unstructured descriptions. They want stable categories and meaningful labels. Keep long text in the relational layer or a detail report source if it is needed for drillthrough.

  • Use text attributes for labels and captions.
  • Avoid free-form text as a cube dimension attribute.
  • Trim and standardize before loading.
  • Align collation across source, DSV, and cube design.

For standards on text handling and analytics design, official vendor documentation and general database collation guidance are the safest references. If your environment includes reporting against customer-facing or regulated data, poor text hygiene can also create compliance problems when names, locations, or codes are misclassified. That is another reason to treat text data as part of modeling discipline, not cosmetic cleanup.

Nulls, Unknowns, And Default Values

Null handling is one of the most important parts of cube design because it affects both accuracy and usability. In SSAS, null can mean “missing,” “unknown,” “not applicable,” or “not collected,” depending on the field. Those meanings are not interchangeable. If you flatten them into a single placeholder without thought, your totals and distributions can become misleading.

Measures and attributes behave differently with nulls. A null measure may indicate no transaction occurred, while a null attribute may mean the dimension member could not be mapped. The cube designer needs to decide whether to preserve nulls, map them to an unknown member, or substitute a controlled default value. That choice should follow business meaning, not convenience.

Unknown members and placeholders

The unknown member is useful when source data contains orphaned facts or unmatched dimension keys. It allows the cube to remain processable and prevents a single bad record from breaking the entire load. But you should not use unknown members as a dumping ground for all exceptions. If everything ends up unknown, the cube is hiding data quality problems instead of exposing them.

Placeholder values like “N/A,” “Other,” or zero can be appropriate when they are intentionally modeled. They are dangerous when they are used to make processing succeed. A zero quantity is not the same as a missing quantity. A null shipping date is not the same as an order not yet shipped. Those distinctions matter in reports and calculations.

Warning

Do not replace every null with zero. In reporting, that can turn missing facts into fake facts and distort averages, ratios, and exception analysis.

If you need a simple rule, use this: missing means the value was not provided, not applicable means the concept does not apply, and zero means the measure was known and truly equal to none. That distinction protects the quality of totals and averages, especially in finance and operations cubes.
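The distortion from flattening nulls to zero is easy to see in an average. A small Python sketch with hypothetical readings, where `None` stands in for a missing value:

```python
# Three readings were provided; one was never collected.
reported = [100, 100, None, 100]

# Correct: average only over the values that were actually known.
known = [v for v in reported if v is not None]
avg_known = sum(known) / len(known)                 # 100.0

# Distorted: zero-filling turns a missing fact into a fake fact.
zero_filled = [v if v is not None else 0 for v in reported]
avg_filled = sum(zero_filled) / len(zero_filled)    # 75.0

print(avg_known, avg_filled)
```

The zero-filled average is 25% lower even though every measured value was identical, which is exactly how exception analysis goes wrong.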

Data Type Conversion In The Data Source View And ETL Layer

The cleanest place to solve type issues is usually ETL, not the cube. That is where you can standardize formats, enforce business rules, and reject bad data before it pollutes the semantic layer. If a date is stored as text in the source, convert it to a proper date type in staging or ETL. If a number arrives with currency symbols, parse it before it reaches the cube. The earlier the fix, the fewer surprises later.

CAST and CONVERT are useful, but they should be part of a deliberate transformation strategy, not a patch for bad source design. Derived columns can help standardize data in the DSV or relational view, especially when source systems cannot be changed quickly. Still, if you keep adding cube-side workarounds, you end up with a model that is hard to debug and harder to maintain.

Where conversion belongs

  1. Source system: best when the system can be corrected upstream.
  2. Staging or ETL: best for standardized cleansing and type enforcement.
  3. Data Source View: acceptable for lightweight presentation or compatibility fixes.
  4. Cube layer: last resort, not first choice.

Validation should happen before deployment. Check for invalid dates, failed numeric conversions, truncation risks, and hidden locale assumptions. If a source column occasionally contains “TBD” in a numeric field, catch it in ETL and quarantine it. Do not hope the cube will ignore it. That hope usually turns into a processing error at the worst possible time.
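A quarantine step like this can be sketched in a few lines. The field name `amount` is a hypothetical example; the pattern is what matters:

```python
# ETL-style validation pass: numeric rows flow through, anything that fails
# conversion (e.g. "TBD") is quarantined instead of reaching the cube.
def validate_amounts(rows):
    clean, quarantined = [], []
    for row in rows:
        try:
            clean.append(float(row["amount"]))
        except (TypeError, ValueError):
            quarantined.append(row)
    return clean, quarantined

clean, bad = validate_amounts([{"amount": "19.99"}, {"amount": "TBD"}])
print(clean, bad)   # [19.99] [{'amount': 'TBD'}]
```

The quarantined rows can then be logged and corrected upstream, while the load itself stays deterministic.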

Note

A predictable SSAS cube usually comes from predictable ETL. Standardization upstream beats troubleshooting semantic errors after processing.

For engine behavior and model authoring guidance, refer back to Microsoft Learn. For broader data governance thinking, NIST-aligned validation practices also support the idea that data should be verified before it becomes a decision input. That principle is just as true in BI as it is in security and operations.

Aggregation Design, Processing, And Storage Considerations

Data types affect more than query correctness. They influence how SSAS builds aggregations, how much space a cube consumes, and how stable processing runs under load. A compact integer key is easier for the engine to compress and partition than a long text value. A high-cardinality attribute like a GUID or free-form identifier can bloat indexes and reduce the usefulness of aggregations. That is especially noticeable when users slice and dice large fact tables by many dimensions.

Precision and scale also matter. A measure with excessive decimal places may increase storage cost without any business value. A float may save space but create precision drift. The right balance is not about using the “largest safe type.” It is about using the smallest type that still accurately reflects the business fact.

Tradeoffs in performance and maintainability

There is always a tension between model purity and operational convenience. A perfectly normalized model with ideal keys may require more ETL work. A quick fix may get a cube into production faster but create hidden maintenance cost later. In enterprise BI, the best decision is usually the one that minimizes both report risk and long-term remediation work.

High-cardinality text and GUID-like values also reduce cache efficiency because the engine has more distinct values to store and resolve. That can slow drilldowns and make attribute browsing less responsive. Numeric precision that is higher than necessary can also increase the processing burden without improving analytics. The result is a cube that looks fine in design review but behaves poorly under real user activity.

Design choice           Practical effect
Compact integer keys    Better compression and faster joins
GUID-like identifiers   Higher storage and slower browsing

For standards-driven environments, it helps to think like a data engineer and a BI developer at the same time. The governing logic behind Data Types is the same: reduce ambiguity, reduce cardinality where possible, and keep the model aligned with the meaning of the business data. Official platform guidance from Microsoft remains the best source for SSAS behavior and processing expectations.

Common Data Type Mistakes To Avoid

The most common mistakes are also the most expensive. Teams store numeric facts as strings, dates as integers without a date dimension strategy, and descriptive values in places where keys should live. Then they try to compensate with calculated members, expressions, or report-side formatting. That approach delays the problem but does not solve it.

Another frequent issue is inconsistency across layers. The source table uses one type, the view uses another, and the cube exposes a third. That makes debugging difficult because each layer can appear “correct” in isolation while the end-to-end model is wrong. If you want a reliable cube, establish a single, documented type strategy and apply it consistently.

Practical mistakes that show up in production

  • Using strings for measures and converting them later in calculations.
  • Mixing collations across source systems and the cube.
  • Choosing big types by default when smaller ones are enough.
  • Ignoring timezone or locale rules for date and time values.
  • Using placeholder values that mask missing data.

Timezone issues are especially common when operational systems collect timestamps from multiple regions. A local timestamp without timezone context may look valid but still sort incorrectly across records. That can distort period analysis or lead to confusing “late” and “early” classifications. Likewise, locale-dependent parsing can turn an apparently simple date import into a silent data integrity issue.
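How local clock times can reverse the true event order is easy to show in Python. The offsets below are hypothetical examples (UTC-4 and UTC+8):

```python
from datetime import datetime, timezone, timedelta

# Two timestamps from different regions. By local clock time, the UTC+8
# event looks "later" (10:00 vs 09:30), but it actually happened first.
east_coast = datetime(2025, 4, 1, 9, 30, tzinfo=timezone(timedelta(hours=-4)))
singapore  = datetime(2025, 4, 1, 10, 0, tzinfo=timezone(timedelta(hours=8)))

print(singapore < east_coast)                # True once offsets are applied
print(singapore.astimezone(timezone.utc))    # 2025-04-01 02:00:00+00:00
```

Normalizing to UTC (or a documented reference zone) in ETL keeps period boundaries and "late/early" classifications consistent across regions.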

If a cube requires constant compensating logic, the type design is probably wrong. Fix the model, not the report.

For validation and risk-reduction perspectives, it is worth cross-checking type decisions against data governance standards and platform documentation. NIST guidance and Microsoft’s official SSAS documentation both support disciplined handling of type and structure. That combination is what keeps cube issues from turning into production incidents.

Testing And Validation Best Practices

Testing a cube means more than checking whether it processes. You need to verify sorting, filtering, totals, and drilldowns using representative sample queries. A cube can process successfully and still be wrong in ways that only show up when a report user filters by month, expands a hierarchy, or compares two role-playing dates. That is why validation should be part of every type-related change.

Start by reconciling cube results against the source system. Pick a few critical measures and run the same filters in SQL and SSAS. If totals do not match, isolate whether the problem is in ETL conversion, key resolution, attribute relationships, or aggregation design. Test for null explosions, truncation, and unexpected member splits caused by hidden whitespace or collation differences.

A practical validation checklist

  1. Query sample totals by day, month, and year.
  2. Test filters on numeric, text, and date attributes.
  3. Compare source and cube totals after processing.
  4. Drill into outliers to see where type mapping failed.
  5. Check member lists for duplicates caused by formatting issues.
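Step 3 of the checklist can be automated with a simple comparison. A Python sketch with hypothetical period totals (in practice, both sides would come from queries against SQL Server and SSAS):

```python
# Compare source totals to cube totals per period and flag mismatches.
source_totals = {"2025-01": 1200.00, "2025-02": 980.50}
cube_totals   = {"2025-01": 1200.00, "2025-02": 975.50}

tolerance = 0.01   # allow for benign rounding at display precision
mismatches = {
    period: (source_totals[period], cube_totals.get(period))
    for period in source_totals
    if abs(source_totals[period] - cube_totals.get(period, 0.0)) > tolerance
}
print(mismatches)   # {'2025-02': (980.5, 975.5)}
```

Any period that appears in the output points you at a specific slice to investigate in ETL conversion, key resolution, or aggregation design.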

Document the expected type, format, and business meaning for every important field. That sounds basic, but it is one of the fastest ways to prevent future errors when the source schema changes or a new developer joins the project. If a field is supposed to be a date key, say so. If it is a surrogate identifier, say that too. Ambiguity invites inconsistency.

Key Takeaway

Validation is not a final step. It is part of the design process for every measure, key, and attribute that affects the cube.

For benchmarking and query verification, you can align your process with standard BI testing discipline and platform guidance from Microsoft Learn. That keeps cube behavior grounded in documented engine behavior rather than assumptions.

Design Guidelines And Practical Decision Framework

When deciding on a data type, start with business meaning. Ask what the value represents, how precise it must be, how often it changes, and how users will consume it. A value that drives a financial total needs a different treatment than a free-text label. A stable dimension key needs a different treatment than a display caption. This is where good Data Modeling Standards prevent unnecessary rework.

A practical rule set helps. Use compact numeric types whenever the business meaning supports it. Use fixed precision for money and counted values. Use dates for calendar logic, not strings. Keep text for labels and descriptions only. Handle nulls deliberately, not casually. If there is any doubt, push standardization into ETL and keep the cube semantic layer as clean as possible.

A simple decision framework

  1. Define the business meaning of the field.
  2. Determine precision and whether rounding is acceptable.
  3. Assess cardinality and storage impact.
  4. Decide consumption needs for reporting and browsing.
  5. Validate end-to-end from source to cube.

Before deployment, review each measure group, dimension key, and attribute with a checklist. Confirm type consistency, collation alignment, timezone handling, and null strategy. If a source system is messy, decide whether the fix belongs in the source, ETL, or DSV. In most enterprise environments, the best answer is to clean upstream data where possible and keep cube-side transformations minimal.

For business-wide governance and workforce alignment, standards bodies and professional guidance can be useful references. NIST supports disciplined data handling, and Microsoft documents how SSAS expects data to behave. If you need the course context, the SSAS : Microsoft SQL Server Analysis Services course fits directly here because it teaches how the engine consumes both multidimensional and tabular structures, including the design choices that affect semantic correctness and performance.


Conclusion

Data Types are a core part of SSAS Multidimensional cube design. They determine whether measures aggregate correctly, whether dimensions sort properly, whether dates support time intelligence, and whether the cube remains fast enough to trust under real workload. If types are wrong, the cube may still process, but the analytics become fragile.

The best practices are straightforward: use the right numeric type for the business meaning, treat keys as stable and compact, store dates as real dates whenever possible, standardize strings before loading, and handle nulls with intention. Most importantly, do the conversion and validation in ETL first. That approach makes the cube more predictable and reduces downstream surprises.

If you are building or maintaining cubes, review your current type strategy field by field. Check whether each value is actually being stored in the most accurate and efficient way. Then validate the results against the source system and document the decisions. That discipline pays off quickly in fewer processing errors, cleaner reports, and better user trust in analytics.

Thoughtful type design reduces cube issues and improves confidence in every number your SSAS model returns.

CompTIA® and Microsoft® are trademarks of their respective owners.

Frequently Asked Questions

Why are data types crucial in SSAS multidimensional cubes?

Data types in SSAS multidimensional cubes are essential because they determine how data is stored, aggregated, and interpreted within the cube. Proper data typing ensures that measures are calculated accurately, and hierarchies behave as expected during analysis.

Incorrect data types can lead to miscalculations, such as summing text fields or misordering date hierarchies. For example, using a string data type for dates can cause sorting issues and prevent proper time-based analysis. Therefore, selecting appropriate data types is fundamental to maintaining data integrity and report accuracy.

What are some common mistakes caused by improper data types in SSAS cubes?

Common mistakes include incorrect totals, wrong sorting orders, and processing errors. For instance, if a date is stored as text, the date hierarchy may sort alphabetically rather than chronologically, leading to confusing reports.

Similarly, if measures are stored with incompatible data types, aggregation functions like sum or average may produce incorrect results. These issues often stem from overlooking the importance of choosing the correct data type during cube design, which can cause costly delays and inaccuracies in reporting.

How can I ensure data type consistency when designing SSAS multidimensional cubes?

To ensure consistency, always verify that the data types in your source data match the intended types in your cube design. Use data profiling tools to identify mismatched or problematic fields before processing.

During cube development, explicitly set data types for each dimension and measure. Additionally, implement data validation routines and test processing with sample data to catch issues early. Proper documentation and adherence to best practices help maintain data integrity over time.

What best practices should I follow regarding data types in SSAS multidimensional cubes?

Best practices include selecting the most precise data type for each field—such as integer for counts, decimal for monetary values, and date for time-related data. Avoid using text fields for numeric or date data to prevent sorting and aggregation problems.

It is also advisable to normalize data types across your data warehouse and cube design to reduce inconsistencies. Regularly review and test your cube processing to ensure data types are correctly applied and that measures and hierarchies function as intended for accurate analysis.

Can incorrect data types affect report performance in SSAS cubes?

Yes, incorrect data types can negatively impact report performance. Using inappropriate data types may cause additional processing overhead, inefficient aggregations, and longer query response times.

For example, storing numeric data as text forces SSAS to perform type conversions during calculations, increasing the load on the server. Proper data typing streamlines processing, improves query efficiency, and ensures that reports generate quickly and accurately, contributing to a better user experience.
