Advanced use cases of network science
The tools of network science extend beyond social networks, spatial data, and temporal data. Deep learning models are ubiquitous in data science today, solving problems in computer vision, natural language processing, time series forecasting, and generative artificial intelligence. Large language models (LLMs) and text-to-image generators rely on a deep learning architecture called the transformer: a neural network that finds patterns in data by embedding the input, tuning attention weights that capture relationships among the input's elements, and decoding the result with respect to the desired output. These models can have billions or even trillions of parameters to tune, spread across many connected layers. When combined with pre-trained contrastive language-image pre-training (CLIP) models, transformer-based models such as DALL-E can even generate realistic images from text input. For instance, inputting hyperdetailed photorealistic king cobra, background desert market into NightCafe’...
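To make the attention-weight idea concrete, here is a minimal sketch of scaled dot-product attention, the core operation inside transformer layers. It is a toy illustration in NumPy, not a production implementation: the token embeddings and projection matrices are random placeholders, and real models add multiple heads, masking, and learned parameters.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Weight each value vector by how strongly its key matches
    each query, normalizing the weights with a softmax."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # query-key similarity, scaled
    # Softmax over keys (subtract row max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: 3 tokens with 4-dimensional embeddings
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                       # token embeddings
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out, attn = scaled_dot_product_attention(X @ Wq, X @ Wk, X @ Wv)
print(out.shape)           # (3, 4): one contextualized vector per token
print(attn.sum(axis=-1))   # each row of attention weights sums to 1
```

Each output row is a mixture of the value vectors, blended according to the learned attention weights; stacking many such layers (with feed-forward sublayers in between) is what lets transformers model long-range relationships in text and images.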