Chapter 5: Store Data

Note

Early draft release: this chapter is an early draft; its content is incomplete and may change.

This chapter covers essential strategies for storing and managing data with PySpark in Microsoft Fabric. It explains how to save data to Lakehouse and Warehouse tables, discusses advanced save parameters such as partitioning, and introduces mssparkutils for efficient file operations. The chapter also highlights best practices for ensuring data integrity and provides practical examples to help you implement robust data storage solutions in Fabric.
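As a brief preview of what follows, the snippet below sketches a typical Lakehouse save: writing a DataFrame as a managed Delta table, physically partitioned by a column. This is a minimal sketch; the DataFrame df, the table name sales, and the partition column year are illustrative placeholders, not names from this chapter.

# Save df as a managed Delta table in the attached Lakehouse,
# partitioning the underlying files by the year column
df.write.format("delta").mode("overwrite").partitionBy("year").saveAsTable("sales")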

5.1 Table of Contents

  1. Introduction
  2. Data Storage Options in Microsoft Fabric
    • Saving to Lakehouse Tables
    • Saving to Warehouse Tables
    • Saving as Files (Parquet, CSV, Delta)
  3. Advanced Save Parameters
    • Partitioning Strategies
    • Best Practices for Partitioning
  4. Using mssparkutils for File Operations
    • Copying Files and Directories
    • Writing and Appending Files
    • Directory Management
  5. Ensuring Data Integrity
    • Data Validation Before Saving
  6. Conclusion
    • Chapter Recap
  7. Exercises
  8. Further Reading

5.1.1 Writing Data to a Warehouse Table

The Spark connector supports writing a DataFrame to a warehouse table using different save modes. Supported modes are errorifexists (the default), ignore, overwrite, and append.

# Make the synapsesql write API available (Fabric Spark connector)
import com.microsoft.spark.fabric

# Write with the default save mode (errorifexists)
warehouse_df.write.synapsesql("<warehouse/lakehouse name>.<schema name>.<table name>")

# Replace the contents of an existing table
warehouse_df.write.mode("overwrite").synapsesql("<warehouse/lakehouse name>.<schema name>.<table name>")

# Add rows to an existing table
warehouse_df.write.mode("append").synapsesql("<warehouse/lakehouse name>.<schema name>.<table name>")