Train Your Brain to Remember Anything You Learn With This Simple, 20-Minute Habit

Not too long ago, a colleague and I were lamenting the process of growing older and the inevitable increasing difficulty of remembering things we want to remember. That becomes particularly annoying when you attend a conference or a learning seminar and find yourself forgetting the entire session just days later.

But then my colleague told me about the Ebbinghaus Forgetting Curve, a 100-year-old formula developed by German psychologist Hermann Ebbinghaus, who pioneered the experimental study of memory. The psychologist’s work has resurfaced and has been making its way around college campuses as a tool to help students remember lecture material. For example, the University of Waterloo explains the curve and how to use it on the Campus Wellness website.

I teach at Indiana University and a student mentioned it to me in class as a study aid he uses. Intrigued, I tried it out too–more on that in a moment. The Forgetting Curve describes how we retain or lose information that we take in, using a one-hour lecture as the basis of the model. The curve is at its highest point (the most information retained) right after the one-hour lecture. One day after the lecture, if you’ve done nothing with the material, you’ll have lost between 50 and 80 percent of it from your memory.

By day seven, that erodes to about 10 percent retained, and by day 30, the information is virtually gone (only 2-3 percent retained). After this, without any intervention, you’ll likely need to relearn the material from scratch. Sounds about right from my experience. But here comes the amazing part–how easily you can train your brain to reverse the curve.


With just 20 minutes of work, you’ll retain almost all of what you learned.

This is possible through the practice of what’s called spaced intervals, where you revisit and reprocess the same material, but in a very specific pattern. Doing so means it takes you less and less time to retrieve the information from your long-term memory when you need it. Here’s where the 20 minutes and very specifically spaced intervals come in.

Ebbinghaus’s formula calls for you to spend 10 minutes reviewing the material within 24 hours of having received it (that will raise the curve back up to almost 100 percent retained again). Seven days later, spend five minutes to “reactivate” the same material and raise the curve up again. By day 30, your brain needs only two to four minutes to completely “reactivate” the same material, again raising the curve back up.

Thus, a total of 20 minutes invested in review at specific intervals and, voila, a month later you have fantastic retention of that interesting seminar. After that, monthly brush-ups of just a few minutes will help you keep the material fresh.


Here’s what happened when I tried it.

I put the specific formula to the test. I keynoted at a conference and was also able to take in two other one-hour keynotes there. For one of the keynotes, I took no notes, and sure enough, just shy of a month later I can barely remember any of it.

For the second keynote, I took copious notes and followed the spaced interval formula. A month later, by golly, I remember virtually all of the material. And in case you’re wondering, both talks were equally interesting to me–the difference was the reversal of Ebbinghaus’s Forgetting Curve.

So the bottom line here is if you want to remember what you learned from an interesting seminar or session, don’t take a “cram for the exam” approach when you want to use the info. That might have worked in college (although the University of Waterloo specifically advises against cramming, encouraging students to follow the aforementioned approach). Instead, invest the 20 minutes (in spaced-out intervals), so that a month later it’s all still there in the old noggin. Now that approach is really using your head.

Science has proven that reading can enhance your cognitive function, develop your language skills, and increase your attention span. Plus, not only does the act of reading train your brain for success, but you’ll also learn new things! The founder of Microsoft, Bill Gates, said, “Reading is still the main way that I both learn new things and test my understanding.”

By: Scott Mautz

Source: Pocket

.

Critics:

Dr. John N. Morris is the director of social and health policy research at the Harvard-affiliated Institute for Aging Research. He believes there are three main guidelines you should follow when training your mind:

  1. Do Something Challenging: Whatever you do to train your brain, it should be challenging and take you beyond your comfort zone.
  2. Choose Complex Activities: Good brain training exercises should require you to practice complex thought processes, such as creative thinking and problem-solving.
  3. Practice Consistently: You know the saying: practice makes perfect! Dr. Morris says, “You can’t improve memory if you don’t work at it. The more time you devote to engaging your brain, the more it benefits.”
  4. Practice self-awareness. Whenever you feel low, check in with yourself and try to identify the negative thought loop at play. Perhaps you’re thinking something like, “who cares,” “I’ll never get this right,” “this won’t work,” or “what’s the point?”
  5. Science has shown that mindfulness meditation helps engage new neural pathways in the brain. These pathways can improve self-observational skills and mental flexibility – two attributes that are crucial for success. What’s more, another study found that “brief, daily meditation enhances attention, memory, mood, and emotional regulation in non-experienced meditators.”
  6. Brain Age Concentration Training is a brain training and mental fitness system for the Nintendo 3DS.
  7. Queendom has thousands of personality tests and surveys. It also has an extensive collection of “brain tools”—including logic, verbal, spatial, and math puzzles; trivia quizzes; and aptitude tests.
  8. Claiming to have the world’s largest collection of brain teasers, Braingle’s free website provides more than 15,000 puzzles, games, and other brain teasers, as well as an online community of enthusiasts.

 

6 Math Foundations to Start Learning Machine Learning

As Data Scientists, machine learning is our arsenal for doing our job. I am pretty sure that in modern times, everyone employed as a Data Scientist uses machine learning to analyze data and produce valuable patterns. So why do we need to learn math for machine learning? There are a few arguments I could give, including:

  • Math helps you select the correct machine learning algorithm. Understanding math gives you insight into how the model works, including choosing the right model parameters and validation strategies.
  • Estimating how confident we are with the model result by producing the right confidence interval and uncertainty measurements needs an understanding of math.
  • The right model would consider many aspects, such as metrics, training time, model complexity, number of parameters, and number of features; math is needed to understand all of these aspects.
  • You could develop a customized model that fits your own problem by knowing the machine learning model’s math.

The main problem is: what math subjects do you need to understand machine learning? Math is a vast field, after all. That is why in this article, I want to outline the math subjects you need for machine learning and a few important points for starting to learn those subjects.

Machine Learning Math

We could learn many topics in math, but if we want to focus on the math used in machine learning, we need to narrow it down. In this case, I like to use the references explained in the book Mathematics for Machine Learning by M. P. Deisenroth, A. A. Faisal, and C. S. Ong (2020).

Their book lays out the math foundations that are important for machine learning. The subjects are:

Image created by Author

Six math subjects form the foundation for machine learning. The subjects are intertwined in developing our machine learning model and reaching the “best” model for generalizing from the dataset.

Let’s dive deeper into each subject to see what it covers.

Linear Algebra

What is Linear Algebra? It is a branch of mathematics concerned with the study of vectors and certain rules for manipulating them. When we formalize intuitive concepts, the common approach is to construct a set of objects (symbols) and a set of rules to manipulate those objects. This is what we know as algebra.

If we talk about Linear Algebra in machine learning, it is defined as the part of mathematics that uses vector spaces and matrices to represent linear equations.

When talking about vectors, people might flash back to their high school studies of vectors with direction, just like the image below.

Geometric Vector (Image by Author)

This is a vector, but not the kind of vector discussed in Linear Algebra for Machine Learning. Instead, we would talk about the kind shown in the image below.

Vector 4×1 Matrix (Image by Author)

What we have above is also a vector, just another kind. You might be familiar with the matrix form (the image below). A vector is a matrix with only one column, known as a column vector. In other words, we can think of a matrix as a group of column vectors or row vectors. In summary, vectors are special objects that can be added together and multiplied by scalars to produce another object of the same kind. Various kinds of objects can be called vectors.

Matrix (Image by Author)

Linear algebra itself is a systematic representation of data that computers can understand, and all the operations in linear algebra are systematic rules. That is why, in modern machine learning, linear algebra is an important subject.

An example of how linear algebra is used is the linear equation. Linear algebra is a tool for solving linear equations because so many problems can be presented systematically in a linear way. A typical linear equation is presented in the form below.

Linear Equation (Image by Author)

To solve the linear equation problem above, we use linear algebra to present the equation in a systematic representation. This way, we can use matrix properties to look for the optimal solution.

Linear Equation in Matrix Representation (Image by Author)

To summarize the Linear Algebra subject, here are three terms you might want to learn more about as a starting point:

  • Vector
  • Matrix
  • Linear Equation
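These three terms come together when we solve a linear system numerically. A minimal sketch using NumPy (the coefficients here are made up for illustration):

```python
import numpy as np

# A hypothetical system of two linear equations:
#   4x + 4y = 5
#   2x - 4y = 1
A = np.array([[4.0, 4.0],
              [2.0, -4.0]])   # coefficient matrix
b = np.array([5.0, 1.0])      # right-hand-side vector

# Solve Ax = b using linear algebra routines
x = np.linalg.solve(A, b)     # solution: x = 1, y = 0.25
```

Representing the equations as a matrix `A` and a vector `b` is exactly the "systematic representation" described above: once the problem is in that form, standard routines can solve it.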

Analytic Geometry (Coordinate Geometry)

Analytic geometry is a study in which we learn data (point) positions using an ordered pair of coordinates. This study is concerned with defining and representing geometric shapes numerically and extracting numerical information from those numerical definitions and representations. In simpler terms, we project the data onto a plane and extract numerical information from there.

Cartesian Coordinate (Image by Author)

Above is an example of how we acquire information from data points by projecting the dataset onto a plane. How we acquire information from this representation is the heart of analytic geometry. To help you start learning this subject, here are some important terms you might need.

  • Distance Function

A distance function is a function that provides numerical information about the distance between the elements of a set. If the distance is zero, the elements are equivalent; otherwise, they are different from each other.

An example of a distance function is the Euclidean distance, which calculates the straight-line distance between two data points.

Euclidean Distance Equation (Image by Author)
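The Euclidean distance can be sketched in a few lines (the sample points are made up for illustration):

```python
import numpy as np

def euclidean_distance(x, y):
    """Straight-line distance: sqrt of the sum of squared coordinate differences."""
    return np.sqrt(np.sum((np.asarray(x) - np.asarray(y)) ** 2))

# Classic 3-4-5 right triangle: the distance between (0, 0) and (3, 4) is 5
print(euclidean_distance([0, 0], [3, 4]))  # 5.0
```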
  • Inner Product

The inner product is a concept that introduces intuitive geometrical concepts, such as the length of a vector and the angle or distance between two vectors. It is often denoted as ⟨x,y⟩ (or occasionally (x,y) or ⟨x|y⟩).
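Both geometric quantities mentioned above, length and angle, fall out of the inner product. A minimal sketch with made-up vectors:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([1.0, 1.0])

# Inner (dot) product <x, y>
ip = np.dot(x, y)                      # 1.0

# Length of a vector: ||y|| = sqrt(<y, y>)
length_y = np.sqrt(np.dot(y, y))       # sqrt(2) ~ 1.4142

# Angle between vectors: cos(theta) = <x, y> / (||x|| * ||y||)
cos_theta = ip / (np.linalg.norm(x) * np.linalg.norm(y))
angle_deg = np.degrees(np.arccos(cos_theta))   # 45 degrees
```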

Matrix Decomposition

Matrix decomposition is the study of ways to reduce a matrix into its constituent parts. It aims to simplify complex matrix operations by performing them on the decomposed matrices rather than on the original matrix.

A common analogy for matrix decomposition is factoring numbers, such as factoring 8 into 2 x 4. This is why matrix decomposition is also called matrix factorization. There are many ways to decompose a matrix, so there is a range of different matrix decomposition techniques. An example is the LU decomposition shown in the image below.

LU Decomposition (Image by Author)
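To make the factoring analogy concrete, here is a minimal sketch of LU decomposition (the textbook Doolittle scheme without pivoting, which assumes no zero pivots; the example matrix is made up):

```python
import numpy as np

def lu_decompose(A):
    """Doolittle LU decomposition without pivoting: A = L @ U,
    with L unit lower-triangular and U upper-triangular."""
    n = A.shape[0]
    L = np.eye(n)
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]      # multiplier that eliminates entry (i, k)
            U[i, k:] -= L[i, k] * U[k, k:]   # Gaussian-elimination row operation
    return L, U

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
L, U = lu_decompose(A)
print(np.allclose(L @ U, A))  # True: the factors multiply back to A
```

Production code would use a pivoting implementation (e.g. from SciPy), but the sketch shows the idea: the matrix is "factored" into simpler triangular pieces.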

Vector Calculus

Calculus is the mathematical study of continuous change, which mainly consists of functions and limits. Vector calculus is concerned with the differentiation and integration of vector fields. Vector calculus is often called multivariate calculus, although the two have slightly different scopes: multivariate calculus deals with calculus applied to functions of multiple independent variables.

There are a few important terms I feel people need to know when starting to learn vector calculus:

  • Derivative and Differentiation

The derivative of a function of a real variable measures the change in the function value (output value) with respect to a change in its argument (input value). Differentiation is the action of computing a derivative.

Derivative Equation (Image by Author)
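The limit definition of the derivative can be approximated numerically. A minimal sketch using a central finite difference (the test function is made up for illustration):

```python
def derivative(f, x, h=1e-6):
    """Central finite-difference approximation of f'(x):
    (f(x + h) - f(x - h)) / (2h) for a small step h."""
    return (f(x + h) - f(x - h)) / (2 * h)

# d/dx x^2 = 2x, so the derivative at x = 3 should be about 6
print(round(derivative(lambda x: x**2, 3.0), 4))  # 6.0
```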
  • Partial Derivative

The partial derivative of a function of several variables is its derivative with respect to one of those variables, with the other variables held constant (as opposed to the total derivative, in which all variables are allowed to vary).

  • Gradient

The gradient is related to the derivative, or the rate of change of a function; you might consider the gradient a fancy word for derivative. The term gradient is typically used for functions with several inputs and a single (scalar) output. The gradient has a direction, telling us which way to move from the current location, e.g., up, down, right, or left.
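Partial derivatives and the gradient fit together directly: the gradient is just the vector of partial derivatives. A minimal numerical sketch (the function is made up for illustration):

```python
import numpy as np

def gradient(f, x, h=1e-6):
    """Numerical gradient: the vector of partial derivatives of f at x.
    Each component nudges one coordinate while holding the others constant."""
    x = np.asarray(x, dtype=float)
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)   # central difference in coordinate i
    return g

# f(x, y) = x^2 + 3y, so grad f = (2x, 3); at (2, 1) that is (4, 3)
f = lambda v: v[0]**2 + 3 * v[1]
print(np.round(gradient(f, [2.0, 1.0]), 4))  # [4. 3.]
```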

Probability and Distribution

Probability is, loosely speaking, the study of uncertainty. Probability can be thought of as the fraction of times an event occurs, or as a degree of belief about an event’s occurrence. A probability distribution is a function that measures the probability that a particular outcome (or set of outcomes) associated with a random variable would occur. A common probability distribution function is shown in the image below.

Normal Distribution Probability Function (Image by Author)
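As a concrete sketch, the normal (Gaussian) density function pictured above can be written directly from its standard formula:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    """Probability density of the normal distribution N(mu, sigma^2):
    exp(-(x - mu)^2 / (2 sigma^2)) / (sigma * sqrt(2 pi))."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

# The standard normal peaks at x = 0 with density 1 / sqrt(2 pi) ~ 0.3989
print(round(normal_pdf(0.0), 4))  # 0.3989
```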

Probability theory and statistics are often associated with a similar thing, but they concern different aspects of uncertainty:

  • In math, we define probability as a model of some process where random variables capture the underlying uncertainty, and we use the rules of probability to summarize what happens.
  • In statistics, we try to figure out the underlying process by observing something that has happened, and we try to explain the observations.

When we talk about machine learning, it is close to statistics because its goal is to construct a model that adequately represents the process that generated the data.

Optimization

In the learning objective, training a machine learning model is all about finding a good set of parameters. What we consider “good” is determined by the objective function or the probabilistic models. This is what optimization algorithms are for; given an objective function, we try to find the best value.

Commonly, objective functions in machine learning are minimized, meaning the best value is the minimum value. Intuitively, finding the best value is like finding the valleys of the objective function. Since the gradient points uphill, we move downhill (opposite to the gradient) and hope to find the lowest (deepest) point. This is the concept of gradient descent.

Gradient Descent (Image by Author)
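The "move opposite to the gradient" idea fits in a few lines. A minimal sketch (the objective function, learning rate, and step count are made up for illustration):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Repeatedly step opposite to the gradient to walk downhill toward a minimum."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)   # the update rule: x_new = x - learning_rate * gradient
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); the minimum is at x = 3
x_min = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(x_min, 4))  # 3.0
```

The learning rate `lr` controls the step size: too large and the iterates can overshoot and diverge, too small and convergence is slow.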

There are a few terms to know as a starting point when learning optimization:

  • Local Minima and Global Minima

The point at which a function takes its minimum value is called the global minimum. However, when we minimize a function using optimization algorithms such as gradient descent, the function can have minimum values at several different points. Points that appear to be minima but are not where the function actually takes its minimum value are called local minima.

Local and Global Minima (Image by Author)
  • Unconstrained Optimization and Constrained Optimization

Unconstrained optimization finds the minimum of a function under the assumption that the parameters can take any possible value (no parameter limitations). Constrained optimization limits the possible values by introducing a set of constraints.

Gradient descent is an unconstrained optimization when there is no parameter limitation. If we set some limit, for example x > 1, it becomes a constrained optimization.

Conclusion

Machine learning is an everyday tool that data scientists use to obtain the valuable patterns we need. Learning the math behind machine learning can give you an edge in your work. There are many math subjects out there, but six matter most when starting to learn machine learning math:

  • Linear Algebra
  • Analytic Geometry
  • Matrix Decomposition
  • Vector Calculus
  • Probability and Distribution
  • Optimization

If you are starting to learn math for machine learning, you could read my other article to avoid common study pitfalls. I also list the math material you might want to check out in that article.

 

By: Cornellius Yudha Wijaya

Source: 6 Math Foundations to Start Learning Machine Learning | by Cornellius Yudha Wijaya | Towards Data Science

.

Critics:

Machine learning (ML) is the study of computer algorithms that improve automatically through experience and by the use of data. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as “training data“, in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks.

A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. In its application across business problems, machine learning is also referred to as predictive analytics.

Machine learning approaches are traditionally divided into three broad categories, depending on the nature of the “signal” or “feedback” available to the learning system:

  • Supervised learning: The computer is presented with example inputs and their desired outputs, given by a “teacher”, and the goal is to learn a general rule that maps inputs to outputs.
  • Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning).
  • Reinforcement learning: A computer program interacts with a dynamic environment in which it must perform a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that’s analogous to rewards, which it tries to maximize.


Neuroscience and a Dose of Emotional Intelligence Reveal a Simple Trick to Learn More With Less Effort


A producer for a television business show called and asked if I was available. He described the theme of the segment and asked if I had any ideas. I offered some possibilities.

“That sounds great,” he said. “We’re live in 30 minutes. And I need you to say exactly what you just said.”

“Ugh,” I thought. I’m not great at repeating exactly what I just said. So I started rehearsing.

Ten minutes later, he called to talk about a series he was developing. I almost asked him if we could postpone that conversation so I could use the time to keep rehearsing, but I figured since I had already run through what I would say two times, I would be fine.

Unfortunately, I was right. I was fine. Not outstanding. Not exceptional. Just … fine. My transitions were weak. My conclusion was more like a whimper than a mic drop. And I totally forgot one of the major points I wanted to make.

Which, according to Hermann Ebbinghaus, the pioneer of quantitative memory research, should have come as no surprise.

Ebbinghaus is best known for two major findings: the forgetting curve and the learning curve.

The forgetting curve describes how new information fades away. Once you’ve “learned” something new, the fastest drop occurs in just 20 minutes; after a day, the curve levels off.

Forgetting curve (image: Wikimedia Commons)

Yep: Within minutes, nearly half of what you’ve “learned” has disappeared.

Or not.

According to Benedict Carey, author of How We Learn, what we learn doesn’t necessarily fade; it just becomes less accessible. In my case, I hadn’t forgotten a key point; otherwise I wouldn’t have realized, minutes after, that I left it out. I just didn’t access that information when I needed it.

Ebbinghaus would have agreed with Carey: He determined that even when we think we’ve forgotten something, some portion of what we learned is still filed away.

Which makes the process of relearning a lot more efficient.

Suppose that the poem is again learned by heart. It then becomes evident that, although to all appearances totally forgotten, it still in a certain sense exists and in a way to be effective. The second learning requires noticeably less time or a noticeably smaller number of repetitions than the first. It also requires less time or repetitions than would now be necessary to learn a similar poem of the same length.

That, in a nutshell, is the power of spaced repetition.

Spaced repetition intervals (image courtesy curiosity.com)

The premise is simple. Learn something new, and within a short period of time you’ll forget much of it. Repeat a learning session a day later, and you’ll remember more.

Repeat a session two days after that, and you’ll remember even more. The key is to steadily increase the time intervals between relearning sessions.

And — and this is important — to make your emotions work for you, not against you, forgive yourself for forgetting. To accept that forgetting — to accept that feeling like you aren’t making much progress — is actually a key to the process.

Why?

  • Forgetting is an integral part of learning. Relearning reinforces earlier memories. Relearning creates different context and connections. According to Carey, “Some ‘breakdown’ must occur for us to strengthen learning when we revisit the material. Without a little forgetting, you get no benefit from further study. It is what allows learning to build, like an exercised muscle.”
  • The process of retrieving a memory — especially when you fail — reinforces access. That’s why the best way to study isn’t to reread; the best way to study is to quiz yourself. If you test yourself and answer incorrectly, not only are you more likely to remember the right answer after you look it up, you’ll also remember that you didn’t remember. (Getting something wrong is a great way to remember it the next time, especially if you tend to be hard on yourself.)
  • Forgetting, and therefore repeating information, makes your brain assign that information greater importance. Hey: Your brain isn’t stupid.

So what should I have done?

While I didn’t have days to prepare, I still could have run through my remarks once, taken a five-minute break, and then done it again.

Even after five minutes, I would have forgotten some of what I planned to say. Forgetting and relearning would have reinforced my memory since, in effect, I would have quizzed myself.

Then I could have taken another five-minute break, repeated the process, and then reviewed my notes briefly before we went live.

And I should have asserted myself and asked the producer if we could talk about the series he was developing later.

Because where learning is concerned, time is everything. Not large blocks of time, though. Not hours-long study sessions. Not sitting for hours, endlessly reading and rereading or practicing and repracticing.

Nope: time to forget and then relearn. Time to lose, and then reinforce, access. Time to let memories and connections decay and become disorganized and then tidy them back up again. Because information is only power if it’s useful. And we can’t use what we don’t remember.

Source: Neuroscience and a Dose of Emotional Intelligence Reveal a Simple Trick to Learn More With Less Effort | Inc.com

.

Critics:

Learning is the process of acquiring new understanding, knowledge, behaviors, skills, values, attitudes, and preferences. The ability to learn is possessed by humans, animals, and some machines; there is also evidence for some kind of learning in certain plants. Some learning is immediate, induced by a single event (e.g. being burned by a hot stove), but much skill and knowledge accumulate from repeated experiences. The changes induced by learning often last a lifetime, and it is hard to distinguish learned material that seems to be “lost” from that which cannot be retrieved.

Human learning starts at birth (it might even start before) and continues until death as a consequence of ongoing interactions between people and their environment. The nature and processes involved in learning are studied in many fields, including educational psychology, neuropsychology, experimental psychology, and pedagogy. Research in such fields has led to the identification of various sorts of learning.

For example, learning may occur as a result of habituation, or classical conditioning, operant conditioning or as a result of more complex activities such as play, seen only in relatively intelligent animals. Learning may occur consciously or without conscious awareness. Learning that an aversive event can’t be avoided nor escaped may result in a condition called learned helplessness.

There is evidence for human behavioral learning prenatally, in which habituation has been observed as early as 32 weeks into gestation, indicating that the central nervous system is sufficiently developed and primed for learning and memory to occur very early on in development.


Why Your Workforce Needs Data Literacy

Organizations that rely on data analysis to make decisions have a significant competitive advantage in overcoming challenges and planning for the future. And yet data access and the skills required to understand the data are, in many organizations, restricted to business intelligence teams and IT specialists.

As enterprises tap into the full potential of their data, leaders must work toward empowering employees to use data in their jobs and to increase performance—individually and as part of a team. This puts data at the heart of decision making across departments and roles and doesn’t restrict innovation to just one function. This strategic choice can foster a data culture—transcending individuals and teams while fundamentally changing an organization’s operations, mindset and identity around data.

Organizations can also instill a data culture by promoting data literacy—because in order for employees to participate in a data culture, they first need to speak the language of data. More than technical proficiency with software, data literacy encompasses the critical thinking skills required to interpret data and communicate its significance to others.

Many employees either don’t feel comfortable using data or aren’t completely prepared to use it. To best close this skills gap and encourage everyone to contribute to a data culture, organizations need executives who use and champion data, training and community programs that accommodate many learning needs and styles, benchmarks for measuring progress and support systems that encourage continuous personal development and growth.

Here’s how organizations can improve their data literacy:

1. Lead

Employees take direction from leaders who signal their commitment to data literacy, from sharing data insights at meetings to participating in training alongside staff. “It becomes very inspiring when you can show your organization the data and insights that you found and what you did with that information,” said Jennifer Day, vice president of customer strategy and programs at Tableau.

“It takes that leadership at the top to make a commitment to data-driven decision making in order to really instill that across the entire organization.” To develop critical thinking around data, executives might ask questions about how data supported decisions, or they may demonstrate how they used data in their strategic actions. And publicizing success stories and use cases through internal communications draws focus to how different departments use data.

2. Train

Self-Service Learning

This approach is “for the people who just need to solve a problem—get in and get out,” said Ravi Mistry, one of about three dozen Tableau Zen Masters, professionals selected by Tableau who are masters of the Tableau end-to-end analytics platform and now teach others how to use it.

Reference guides for digital processes and tutorials for specific tasks enable people to bridge minor gaps in knowledge, minimizing frustration and the need to interrupt someone else’s work to ask for help. In addition, forums moderated by data specialists can become indispensable roundups of solutions. Keeping it all on a single learning platform, or perhaps your company’s intranet, makes it easy for employees to look up what they need.

3. Measure

Success Indicators

Performance metrics are critical indicators of how well a data literacy initiative is working. Identify which metrics need to improve as data use increases and assess progress at regular intervals to know where to tweak your training program. Having the right learning targets will improve data literacy in areas that boost business performance.

And quantifying the business value generated by data literacy programs can encourage buy-in from executives. Ultimately, collecting metrics, use cases and testimonials can help the organization show a strong correlation between higher data literacy and better business outcomes.

4. Support

Knowledge Curators

Enlisting data specialists like analysts to showcase the benefits of using data helps make data more accessible to novices. Mistry, the Tableau Zen Master, referred to analysts who function in this capacity as “knowledge curators” guiding their peers on how to successfully use data in their roles. “The objective is to make sure everyone has a base level of analysis that they can do,” he said.

This is a shift from traditional business intelligence models in which analysts and IT professionals collect and analyze data for the entire company. Internal data experts can also offer office hours to help employees complete specific projects, troubleshoot problems and brainstorm different ways to look at data.

What’s most effective depends on the company and its workforce: The right data literacy program will implement training, software tools and digital processes that motivate employees to continuously learn and refine their skills, while encouraging data-driven thinking as a core practice.

For more information on how you can improve data literacy throughout your organization, read these resources from Tableau:

The Data Culture Playbook: Start Becoming A Data-Driven Organization

Forrester Consulting Study: Bridging The Great Data Literacy Gap

Data Literacy For All: A Free Self-Guided Course Covering Foundational Concepts

By: Natasha Stokes

Source: Why Your Workforce Needs Data Literacy

.

Critics:

As data collection and data sharing become routine and data analysis and big data become common ideas in the news, business, government and society, it becomes more and more important for students, citizens, and readers to have some data literacy. The concept is associated with data science, which is concerned with data analysis, usually through automated means, and the interpretation and application of the results.

Data literacy is distinguished from statistical literacy since it involves understanding what data mean, including the ability to read graphs and charts as well as draw conclusions from data. Statistical literacy, on the other hand, refers to the “ability to read and interpret summary statistics in everyday media” such as graphs, tables, statements, surveys, and studies.

As guides for finding and using information, librarians lead workshops on data literacy for students and researchers, and also work on developing their own data literacy skills. A set of core competencies and contents that can be used as an adaptable common framework of reference in library instructional programs across institutions and disciplines has been proposed.

Resources created by librarians include MIT‘s Data Management and Publishing tutorial, the EDINA Research Data Management Training (MANTRA), the University of Edinburgh’s Data Library and the University of Minnesota libraries’ Data Management Course for Structural Engineers.
