The goal is to be able to define and solve statistical problems without spending much time referencing or relearning theory and code for every situation.
We want an inventory of code snippets and scripts that we can re-use on the current project or issue, as well as theory and general knowledge that is understood and easily accessible on the fly.
The practice approach is heavily based on brain-based learning ideas like chunking, revising, engagement, and coherence – as described in Teaching with the Brain in Mind by Eric Jensen.
We want to be able to practice stats programming regularly and effectively without expending too much energy or stress (because we have a lot to do each day besides stats work). Stats is such a big topic that it's easy to wear yourself out.
1. Review source materials and sort topics and problems into sub-topic sheets, each covering only one particular subject and, most importantly, organized in the way that you individually understand it.
2. Break topic sheets down into particular micro-questions and highlights that represent the "core intention" or value of the different parts of that topic. The objective is to be able to revisit each question and work on it without reviewing any background information. We want to create small chunks within each topic.
Make sure each question is completely self-contained, meaning you don't need to reference any other topic or question to solve it.
Also find other materials and examples, independent of the original source material, that describe and demonstrate the particular point, and list them in the micro-question.
3. Revisit each chunked question, state or work out the solution, and create code snippets in R that solve the problem. Alternate between only stating the objective of the theory and actually writing the code. Skip around to different topics and questions – don't require yourself to work in a linear, sequential order.
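For example, a chunked micro-question and its R snippet might look like the following (the question and sample data are hypothetical; this is just a sketch of the format using base R):

```r
# Micro-question: "Given a numeric vector, how do I compute a 95%
# confidence interval for the mean?"
# Self-contained: no other topic sheet or question is needed to answer it.

x <- c(4.1, 5.3, 4.8, 6.0, 5.5, 4.9)   # small illustrative sample

n    <- length(x)
xbar <- mean(x)
se   <- sd(x) / sqrt(n)                 # standard error of the mean
crit <- qt(0.975, df = n - 1)           # t critical value for a 95% CI

ci <- c(lower = xbar - crit * se, upper = xbar + crit * se)
print(ci)
```

Because the question names its own inputs and goal, you can pick it up cold in any study session without reloading the surrounding theory.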
Do each of these steps in its own study session. For example, if you have one hour to study, spend that time only reviewing source materials and organizing topics into small study sheets, and don't worry about coding or building sub-topics or micro-questions. In another session, focus only on coding a solution to a question you've reviewed a couple of times in the past and are already really familiar with. And so on. This way you won't waste brain energy reloading the background knowledge and details needed for each study step, or try to hold too much information in your head at once. You can stay in the groove for each activity, which is more fun and encouraging as you complete each small item.
Topics to review and focus on separately so you can lighten the cognitive load, skip around, and more easily get the big picture when learning regression modeling:
Use Cases and Modeling Examples
– Understand the scenario
– What kind of data is involved
– What are the statistical characteristics of the data
– What is the problem or hypothesis to solve
– What type of model is used to solve this problem
– How is the model applied to solve the problem
– How are answers to the hypotheses used in decision making
Model General Information
– Types of data the model operates on
– What type(s) of output the model produces
– What questions you can answer with the model
– How it differs from other models
Model Derivation and Assumptions Math
Fit Analysis Methods General Information
Fit Analysis Methods Derivation Math
Fit Analysis Application
Diagnostics Methods General Information
Diagnostics Methods Derivation Math
Estimation / Prediction Methods General Information
Estimation / Prediction Methods Derivation Math
Estimation / Prediction Application
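As a sketch of how these topic areas map onto actual R code, here is a minimal linear regression pass touching use case, fit analysis, diagnostics, and prediction (the dataset is R's built-in mtcars; the variable choices are illustrative, not prescribed by any source):

```r
# Use case: does vehicle weight predict fuel economy? (mtcars ships with R)
fit <- lm(mpg ~ wt, data = mtcars)

# Fit analysis: coefficients, R-squared, residual standard error
summary(fit)

# Diagnostics: residuals vs. fitted values, a basic check of
# linearity and constant variance
plot(fit, which = 1)

# Estimation / prediction: predicted mpg for a 3,000 lb car
# (wt is measured in 1000s of lbs)
predict(fit, newdata = data.frame(wt = 3.0), interval = "prediction")
```

Each line above could become its own micro-question (e.g. "how do I read the residuals-vs-fitted plot?") so the topics stay small and separately practicable.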
Learn enough to know what questions to ask, then use those questions to deepen your understanding
Skim and prime information on a project or study curriculum as much as possible to give your brain previews of all the elements, so that you can form a big picture of how everything works together and a loose working knowledge of what you're doing.
This adds more neural connections to all of the new information you learn or use, which increases the speed at which you absorb and process it.
Break up tasks and concepts into atomic, standalone tasks to do later. The key is that each atomic task doesn't require you to remember the other parts of the information or systems you're learning or executing the task for. Splitting work into standalone tasks reduces the amount of new information and concepts you need to have actively "loaded" into your prefrontal cortex while working on or learning a new task.
This is important because the more information you hold in your prefrontal cortex, the more energy it takes to sustain your thinking about a task – which leads to more stress and faster burnout. Once you've learned all the individual elements permanently, they'll be "hard-wired" into your basal ganglia and will require much less energy (less stress) to use.
Executing small, quick tasks is also more fun and motivating.
*Teaching with the Brain in Mind by Eric Jensen
*Your Brain at Work by David Rock