Bite-sized actionable content, practical tutorials, and resources for Python programmers.

#64: ChatGPT Explains Shapiro-Wilk Test, Python VS Code Updates, and dtaianomaly for Time Series

Join Snyk's Fetch the Flag CTF 2025!
Snyk's annual Capture the Flag event, Fetch the Flag 2025, is back! Hosted by Snyk and John Hammond, the event will take place on February 27, 2025, from 9am ET to 9pm ET. Engage in 20+ real-world security challenges, enhance your professional development, and earn CPE credits.
Register for free!

Hi ,

In today's Expert Insight we bring you an excerpt from the book Coding with ChatGPT and Other LLMs, which discusses using LLMs like ChatGPT to explain and document code, specifically the code for the Shapiro-Wilk normality test.

News Highlights: Python in Visual Studio Code – February 2025 Release adds no-config debugging, test discovery cancellation, Native REPL, and AI code actions; CLI2 v4 introduces a simplified CLI, Sphinx integration, and 12-factor config; and Python's docs expose risks in legacy web development with insecure CGI code.

My top 5 picks from today's learning resources:
- Introduction to CUDA Programming for Python Developers
- Fine Tuning a Python Code Completion Model
- Slashing my ".bashrc" in half
- 20 Important Statistical Approaches Every Data Scientist Knows
- Python does a super weird thing that puts methods on the wrong object

And, in From the Cutting Edge, we introduce dtaianomaly, an open-source Python library that standardises and simplifies time series anomaly detection for both researchers and industry, offering a scikit-learn-inspired API, extensibility for developing new models, and tools for preprocessing, benchmarking, and visual analysis.

Stay awesome!
Divya Anne Selvaraj
Editor-in-Chief

Sign Up | Advertise

Python in the Tech Jungle

News
- Python in Visual Studio Code – February 2025 Release: This release of the Python, Pylance, and Jupyter extensions introduces no-config debugging, test discovery cancellation, Native REPL launch, Go to Implementation with Pylance, and an experimental AI code action for generating symbols.
- Cli2 Python Automation framework v4 released: The Python automation framework for DevOps includes features like a simplified CLI alternative, Sphinx integration for documentation, and a 12-factor configuration library.
- Python's official documentation contains textbook example of insecure code (XSS): The module with the insecure CGI code example is deprecated in Python 3.11 and removed in 3.13, but it exposes risks in legacy Python web development.

Case Studies and Experiments
- Lines, Scribbles, and an Algorithm: AI in Sleep Analysis and Brain Mapping: Explores how Python and machine learning, using libraries like MNE and Random Forest Classifiers, can be applied to EEG data for classifying sleep stages and analyzing brain activity.
- Antivirus Bypass Method for Metasploit Tools: Details a method for bypassing antivirus detection using Metasploit tools, including creating a Python-based Meterpreter payload with msfvenom, obfuscating it through encoding and compression techniques, and configuring msfconsole for stealth and persistence.

Analysis
- When Maps Made Me Dumb, Valuable Geospatial Insights Rescued Me: Explores how Python's GeoPandas library, along with tools like QGIS, can help accurately measure geographic areas and understand the distortions caused by map projections like Mercator.
- Python does a super weird thing that puts methods on the wrong object: Explores why Python places the join method on strings instead of lists, examining its design, historical context, and alignment with Python's philosophy.
Tutorials and Guides
- Introduction to CUDA Programming for Python Developers: Explains GPU architecture, parallel processing, and the CUDA platform for optimizing computations, especially for deep learning.
- Building an Open, Multi-Engine Data Lakehouse with S3 and Python: Explains how to set up a Python-first, open lakehouse on cloud storage, integrate multiple query engines, and handle metadata management using Iceberg and Snowflake's Open Catalog.
- Fine Tuning a Python Code Completion Model: Details the process of fine-tuning the smolLM 135M model for Python code completion using LoRA, with an iterative approach to optimizing hyperparameters, achieving efficient training and strong code generation performance.
- Train a Small GPT-Style LLM from Scratch: Provides a Jupyter Notebook for training a small GPT-style, decoder-only language model from scratch using PyTorch, covering tokenization, positional encoding, self-attention, and more.
- Automating FrameBlender with AI Agents and LangGraph: Explains how to automate a previously interactive system with multi-agent workflows using LangGraph to improve scalability and efficiency.
- UV – An Ultimate Python Package Manager: The Beginner's Guide: Covers how to install and use UV for managing Python environments, packages, and projects efficiently, along with advanced features like multi-version support and isolated tool execution.
- Synthetic Data (What, Why, When, and How?): Defines synthetic data and its use cases in scenarios like healthcare and finance, and provides a Python code snippet to generate synthetic data using libraries like Faker, Pandas, and xlsxwriter.

Best Practices and Advice
- Slashing my ".bashrc" in half: Covers how the author improved their shell configuration by replacing numerous plugins with efficient, modern Rust-based tools, resulting in a faster, leaner, and easier-to-maintain ".bashrc".
- Benchmarking utility for Python: Details how Go's built-in testing framework simplifies benchmarking, and introduces a custom utility function, autobench, to replicate Go's automatic loop count and repetition handling in Python.
- 10 machine learning mistakes and how to avoid them: Outlines issues like AI hallucinations, model bias, poor data quality, and integration challenges, and offers insights on how to avoid them.
- pytest snippets for python 3.11+: Provides useful snippets focusing on asynchronous test support, integration test handling, and SQLAlchemy async support with factory-boy.
- 20 Important Statistical Approaches Every Data Scientist Knows: Covers methods like Bayesian Inference, Maximum Likelihood Estimation, and Principal Component Analysis, with Python code snippets to illustrate each technique.
From the Cutting Edge: dtaianomaly – A Python library for time series anomaly detection

In "dtaianomaly: A Python library for time series anomaly detection," Carpentier et al. introduce an open-source Python library for time series anomaly detection. The library aims to bridge the gap between cutting-edge research and practical applications in business and industry.

Context

Time series anomaly detection (TSAD) involves identifying outliers or abnormal observations in time series data. Accurate anomaly detection is crucial in various fields, such as healthcare, finance, and IoT. However, many existing tools struggle to effectively integrate state-of-the-art research with practical industrial applications. dtaianomaly addresses this issue by simplifying anomaly detection for practitioners while providing a framework for researchers to extend the tool with new methods.

Key concepts in this domain include anomaly detection, which refers to the identification of unusual patterns in data; time series data, which consists of data points indexed by time; and APIs, which offer a standard interface for communication between software systems.

Key Features
- Standardised API: Provides consistency for new users and compatibility with tools like scikit-learn.
- Extensibility: Designed to allow easy addition of new anomaly detection methods.
- Experimental Validation: Simplifies benchmarking, enabling comparison of models with minimal code.
- Range of Anomaly Detectors: Includes various built-in detectors, such as Matrix Profile.
- Time Series Preprocessing: Provides tools for scaling, resampling, and smoothing data.
- Visual Analysis Tools: Visualisation techniques for qualitative assessment of models.
- Confidence Prediction: Estimates the confidence level of anomaly scores.
- Profiling: Supports runtime and memory profiling, useful for evaluating edge-device suitability.
- Cross-Platform Unit Testing: Ensures robustness with thorough testing across platforms.

What This Means for You

dtaianomaly is relevant to both researchers and industry practitioners. Researchers can use it to rapidly implement and test new algorithms, while businesses can apply its established tools for practical anomaly detection tasks, such as in sensor data or financial transactions. The library's user-friendly documentation and standardised API make it accessible to newcomers, while its extensibility makes it valuable for ongoing research.

Examining the Details

dtaianomaly uses an object-oriented design similar to scikit-learn. Its core component, the BaseDetector, standardises the interface for all anomaly detection models, making it easy to integrate new methods. The library supports time series preprocessing, preparing data before anomaly detection, and offers an automated workflow for large-scale experimental validation. The LazyDataLoader ensures efficient handling of large datasets, while the Pipeline object allows for chaining preprocessing and detection steps. Evaluation is facilitated through both quantitative metrics (e.g., AUC, precision, recall) and qualitative visualisations.

The library also supports runtime and memory profiling, helping determine whether a model is suitable for deployment on edge devices, which is particularly useful for resource-constrained industries. Its cross-platform compatibility and continuous testing ensure reliability.

You can learn more by reading the entire paper or accessing the library on GitHub.
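To make the scikit-learn-style design concrete, here is a minimal, self-contained sketch of the kind of detector interface described above. It does not use dtaianomaly itself; apart from the BaseDetector name, which the paper uses, every class and method below (RollingZScoreDetector, fit, decision_function) is our own illustrative stand-in, and the real library's API may differ.

```python
# Illustrative sketch only: mimics a scikit-learn-style detector interface;
# this is NOT dtaianomaly's actual API.
import numpy as np


class BaseDetector:
    """Minimal stand-in for a standardised anomaly detector interface."""

    def fit(self, X: np.ndarray) -> "BaseDetector":
        raise NotImplementedError

    def decision_function(self, X: np.ndarray) -> np.ndarray:
        """Return an anomaly score per time step (higher = more anomalous)."""
        raise NotImplementedError


class RollingZScoreDetector(BaseDetector):
    """Toy detector: flags points far from a rolling mean."""

    def __init__(self, window: int = 20):
        self.window = window

    def fit(self, X: np.ndarray) -> "RollingZScoreDetector":
        # Nothing to learn for this toy detector; real detectors would
        # estimate their parameters from X here.
        return self

    def decision_function(self, X: np.ndarray) -> np.ndarray:
        scores = np.zeros(len(X), dtype=float)
        for i in range(len(X)):
            window = X[max(0, i - self.window):i + 1]
            std = window.std() or 1.0  # avoid division by zero
            scores[i] = abs(X[i] - window.mean()) / std
        return scores


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    series = rng.normal(size=500)
    series[250] += 8.0  # inject an obvious point anomaly
    detector = RollingZScoreDetector(window=30).fit(series)
    scores = detector.decision_function(series)
    print("most anomalous index:", int(scores.argmax()))  # expect ~250
```

Because fit and the scoring call share one interface, detectors built this way can be swapped in and out of a benchmarking loop with minimal code, which is the workflow the library is designed around.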
Expert insight

Here's an excerpt from "Chapter 3: Grounding Responses" in the book Coding with ChatGPT and Other LLMs by Dr. Vincent Austin Hall.

Documenting code

Often, you'll come across code that someone else (including an earlier version of you) has written and it'll not make sense to you. You might have to go through each line and function and class, running it in your mind and testing each bit of code in an IDE.

Fortunately, there is now another option: LLMs or chatbots. You can get one to be your personal code translator to clarify what is going on. You can put some code in the chat window and ask the bot to explain what it does. You will generally also get detailed explanations when the chatbots give you code.

Let's get ChatGPT to explain some code

Statistical tests are useful for statisticians and data scientists, so let's do some. Jason Brownlee can help here [Brownlee_17Stats]. What is a Shapiro-Wilk test? Here is some code in Python for this, from 17 Statistical Hypothesis Tests in Python (Cheat Sheet):

```python
# Example of the Shapiro-Wilk Normality Test
from scipy.stats import shapiro

data = [0.873, 2.817, 0.121, -0.945, -0.055, -1.436, 0.360, -1.478, -1.637, -1.869]
stat, p = shapiro(data)
print('stat=%.3f, p=%.3f' % (stat, p))
if p > 0.05:
    print('Probably Gaussian')
else:
    print('Probably not Gaussian')
```

What does ChatGPT say about this? Here is the prompt:

Please explain this code to me, I don't understand it at all.

Here is ChatGPT's response (I put it into a markdown interpreter to get it into a nice style, without the ** around the headings ** [Dillinger]):

Certainly! The provided code is an example of how to use the Shapiro-Wilk Normality Test in Python, specifically with the scipy.stats library. This statistical test is used to determine whether a given sample follows a Gaussian (normal) distribution. Here's a breakdown of the code:

```python
# Importing the Shapiro-Wilk test from the scipy.stats module
from scipy.stats import shapiro

# Creating a sample dataset (in this case, a list of numbers)
data = [0.873, 2.817, 0.121, -0.945, -0.055, -1.436, 0.360, -1.478, -1.637, -1.869]

# Applying the Shapiro-Wilk test to the dataset
stat, p = shapiro(data)

# Printing the test statistic and the p-value
print('stat=%.3f, p=%.3f' % (stat, p))

# Checking the p-value and making a decision based on a significance level (e.g., 0.05)
if p > 0.05:
    # If p > 0.05, we fail to reject the null hypothesis, suggesting the data is probably Gaussian.
    print('Probably Gaussian')
else:
    # If p <= 0.05, we reject the null hypothesis, suggesting the data is probably not Gaussian.
    print('Probably not Gaussian')
```

Here's a step-by-step explanation:

1. Importing the Shapiro-Wilk test function:

   from scipy.stats import shapiro

   This line imports the shapiro function from the scipy.stats module.

2. Defining a sample dataset:

   data = [0.873, 2.817, 0.121, -0.945, -0.055, -1.436, 0.360, -1.478, -1.637, -1.869]

   This line creates a list named data containing a set of numerical values.

3. Applying the Shapiro-Wilk test:

   stat, p = shapiro(data)

   This line applies the Shapiro-Wilk test to the dataset, and the results are stored in two variables: stat (the test statistic) and p (the p-value).

4. Printing the results:

   print('stat=%.3f, p=%.3f' % (stat, p))

   This line prints the test statistic and the p-value with three decimal places.

5. Interpreting the results:

   if p > 0.05:
       print('Probably Gaussian')
   else:
       print('Probably not Gaussian')

   Based on a significance level (0.05 is commonly used), this code prints whether the dataset is probably Gaussian or probably not Gaussian. If p > 0.05, we fail to reject the null hypothesis, suggesting the data is probably Gaussian. If p <= 0.05, we reject the null hypothesis, suggesting the data is probably not Gaussian.

Here is my review: ChatGPT (not GPT-4, but definitely use that instead) tells us what the test is for, how to import the relevant library and where it comes from, how to create example data and what type it is, about the statistics returned, about printing, and what each statistical result means. That's really helpful!

What could it do better? Maybe it could suggest other tests that might be able to do this, when not to use the test, and whether it might be able to critique the code.
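As an aside not from the book: two other normality tests ChatGPT might reasonably point you to are D'Agostino's K^2 test and the Anderson-Darling test, both available in scipy.stats. The snippet below is our own illustrative sketch of running them on the same sample; note that D'Agostino's test will warn on such a small sample because it prefers at least 20 observations.

```python
# Illustrative sketch (not from the book): two other normality tests from scipy.stats
from scipy.stats import normaltest, anderson

data = [0.873, 2.817, 0.121, -0.945, -0.055, -1.436, 0.360, -1.478, -1.637, -1.869]

# D'Agostino's K^2 test combines skewness and kurtosis into a single statistic.
# With only 10 points, scipy warns that the kurtosis part prefers n >= 20.
stat, p = normaltest(data)
print("D'Agostino: stat=%.3f, p=%.3f" % (stat, p))
print('Probably Gaussian' if p > 0.05 else 'Probably not Gaussian')

# Anderson-Darling returns a statistic plus critical values at several significance levels
result = anderson(data, dist='norm')
print('Anderson-Darling: stat=%.3f' % result.statistic)
for sl, cv in zip(result.significance_level, result.critical_values):
    verdict = 'probably Gaussian' if result.statistic < cv else 'probably not Gaussian'
    print('  at %.1f%%: critical=%.3f -> %s' % (sl, cv, verdict))
```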
We can ask the following:

ChatGPT, when should I not use this statistical test?

ChatGPT lists six points and details that the test gets better with larger sample sizes, but there is a limit: on very large samples it might also find significance where there isn't any. ChatGPT also mentions non-Gaussian distributions and sensitivity to outliers, and suggests other tests and checks to confirm whether the data is normally distributed.

There are far more details, which I won't get into here for brevity, but I'm sure if you ask the AI for it, it'll give you good information [ChatGPT].

We could also ask Gemini to critique the code:

Critique the code and give us some visualizations to help us understand the Shapiro-Wilk test.

Let's check how that compares with what Gemini says about the code...
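As an illustrative aside (ours, not Gemini's output or the book's), the standard visual companions to the Shapiro-Wilk test are a histogram and a normal Q-Q plot; a minimal sketch using matplotlib and scipy:

```python
# Illustrative sketch: visual checks that complement the Shapiro-Wilk test
import matplotlib.pyplot as plt
from scipy import stats

data = [0.873, 2.817, 0.121, -0.945, -0.055, -1.436, 0.360, -1.478, -1.637, -1.869]

fig, (ax_hist, ax_qq) = plt.subplots(1, 2, figsize=(10, 4))

# Histogram: a roughly bell-shaped sample supports the "probably Gaussian" verdict
ax_hist.hist(data, bins=5, edgecolor='black')
ax_hist.set_title('Histogram of sample')

# Q-Q plot: points close to the reference line indicate approximate normality
stats.probplot(data, dist='norm', plot=ax_qq)
ax_qq.set_title('Normal Q-Q plot')

plt.tight_layout()
plt.show()
```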
Coding with ChatGPT and Other LLMs was published in November 2024. Packt library subscribers can continue reading the entire book for free.

Get the eBook for $35.99 $24.99
Get the Print Book for $44.99

And that's a wrap.

We have an entire range of newsletters with focused content for tech pros. Subscribe to the ones you find the most useful here. The complete PythonPro archives can be found here.

If you have any suggestions or feedback, or would like us to find you a Python learning resource on a particular subject, just respond to this email!