Professional Context
Balancing model accuracy against computational efficiency is a daily tug-of-war for machine learning practitioners, who must optimize performance while meeting tight project deadlines, managing data quality, and navigating stakeholder expectations.
💡 Expert Advice & Considerations
Blindly trusting the AI with basic data cleaning is risky; always verify its transformations, and reserve AI assistance for higher-level tasks like model selection and hyperparameter tuning, where it can genuinely augment your expertise.
Advanced Prompt Library
4 Expert Prompts

Model Selection for Classification Task
Given a dataset with 10 features and 3 target classes, and assuming a maximum model complexity of 1,000 parameters, compare the performance of logistic regression, random forest, and support vector machine (SVM) models on a held-out test set, using metrics such as accuracy, precision, recall, and F1-score. Provide a ranked list of models by performance, along with a discussion of the strengths and limitations of each approach.
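The comparison this prompt asks for can be sketched in scikit-learn. This is a minimal illustration on a synthetic dataset; the sample size, model settings, and ranking metric (macro F1) are assumptions, not part of the prompt.

```python
# Hypothetical sketch: comparing three classifiers on a synthetic
# 10-feature, 3-class dataset. Sizes and hyperparameters are illustrative.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

X, y = make_classification(n_samples=600, n_features=10, n_informative=6,
                           n_classes=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(n_estimators=50, random_state=0),
    "svm": SVC(kernel="rbf", random_state=0),
}

results = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_test, y_pred, average="macro", zero_division=0)
    results[name] = {"accuracy": accuracy_score(y_test, y_pred),
                     "precision": prec, "recall": rec, "f1": f1}

# Rank models by macro F1, best first
ranking = sorted(results, key=lambda m: results[m]["f1"], reverse=True)
for name in ranking:
    print(name, results[name])
```

Macro averaging treats the 3 classes equally; with imbalanced classes, `average="weighted"` may be the more honest ranking criterion.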
Hyperparameter Tuning for Neural Network
For a neural network with 2 hidden layers and a maximum of 500 epochs, perform a grid search over the following hyperparameters: learning rate (0.01, 0.1, 1), batch size (32, 64, 128), and regularization strength (0.1, 1, 10). Evaluate the model's performance on a validation set using mean squared error (MSE), and provide a heatmap of the results highlighting the optimal hyperparameter combination.
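One way to realize this grid search is with scikit-learn's `GridSearchCV` over an `MLPRegressor`; the network width, dataset, and epoch cap below are placeholder assumptions, while the three hyperparameter grids match the prompt.

```python
# Hedged sketch of the grid search described above. MLPRegressor stands in
# for the 2-hidden-layer network; alpha is its L2 regularization strength.
import warnings
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import GridSearchCV
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=8, noise=5.0, random_state=0)

param_grid = {
    "learning_rate_init": [0.01, 0.1, 1.0],
    "batch_size": [32, 64, 128],
    "alpha": [0.1, 1.0, 10.0],
}

net = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=200,
                   solver="adam", random_state=0)

# 3 x 3 x 3 = 27 combinations, each scored by 3-fold cross-validated MSE
search = GridSearchCV(net, param_grid, cv=3,
                      scoring="neg_mean_squared_error", n_jobs=-1)
with warnings.catch_warnings():
    warnings.simplefilter("ignore")   # lr=1.0 rarely converges cleanly
    search.fit(X, y)

print("best params:", search.best_params_)
print("best CV MSE:", -search.best_score_)
```

The `search.cv_results_` dict can then be pivoted into a heatmap (e.g. with seaborn) over any two hyperparameters, averaging over the third.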
Feature Importance Analysis for Regression Task
Using a dataset with 20 features and a continuous target variable, train a gradient boosting regressor and compute feature importance scores using the permutation importance method. Provide a bar plot of the top 10 features by importance, along with a discussion of the relationships between the features and the target variable, and suggestions for feature engineering and selection.
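The workflow this prompt describes maps directly onto `permutation_importance` from `sklearn.inspection`. A minimal sketch on synthetic data follows; sample counts and repeat counts are assumptions.

```python
# Illustrative sketch: permutation importance for a gradient boosting
# regressor on a synthetic 20-feature dataset.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=400, n_features=20, n_informative=8,
                       noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# Shuffle each feature on the held-out set and measure the drop in R^2;
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Top 10 features by mean importance (bar-plot these with matplotlib)
top10 = np.argsort(result.importances_mean)[::-1][:10]
for idx in top10:
    print(f"feature_{idx}: {result.importances_mean[idx]:.4f}")
```

Computing importances on held-out data, as above, avoids the inflated scores that impurity-based importances can give to high-cardinality features.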
Model Explainability for Black-Box Classifier
For a trained black-box classifier (e.g. a neural network or ensemble method), generate a set of interpretable explanations using the SHAP (SHapley Additive exPlanations) method, focusing on a subset of 10 instances with high predicted probabilities. Provide a plot of the SHAP values for each feature, along with a discussion of the insights gained into the model's decision-making process, and suggestions for model improvement and refinement.
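In practice one would use the `shap` library for this; to keep the sketch dependency-free, the example below computes exact Shapley values by brute force on a small feature set, approximating "absent" features with the background mean, which is the same idea SHAP's KernelExplainer approximates at scale. The dataset, model, and baseline choice are all assumptions.

```python
# Hedged sketch: exact Shapley-value attributions for a black-box
# classifier's class-1 probability. Feasible here only because n_features
# is small; real workloads should use the shap library instead.
from itertools import combinations
from math import factorial
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=300, n_features=5, n_informative=3,
                           random_state=0)
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
background = X.mean(axis=0)   # stand-in values for "missing" features

def predict_with_subset(x, subset):
    """Class-1 probability when only `subset` features come from x."""
    z = background.copy()
    z[list(subset)] = x[list(subset)]
    return model.predict_proba(z.reshape(1, -1))[0, 1]

def shapley_values(x):
    n = len(x)
    phi = np.zeros(n)
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            # Classic Shapley weight for coalitions of this size
            weight = factorial(size) * factorial(n - size - 1) / factorial(n)
            for S in combinations(others, size):
                phi[i] += weight * (predict_with_subset(x, S + (i,))
                                    - predict_with_subset(x, S))
    return phi

# Explain one high-probability instance; the prompt would repeat this for
# the top-10 instances and plot phi per feature.
probs = model.predict_proba(X)[:, 1]
x = X[np.argmax(probs)]
phi = shapley_values(x)
print("Shapley values:", np.round(phi, 4))
```

By the efficiency property, `phi.sum()` equals the prediction for `x` minus the prediction for the all-background baseline, which makes the attributions directly interpretable as contributions to the model's output.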