In this post, I will document some of the lessons I learned as a data-driven university teacher of a legal design class called Modern Lawyers.
My goal was simple: to create a learning experience that works for my students and achieves the intended goal of the class.
What was the class about?
The class was about innovation, tech, and design in the context of law, combined with creative critical thinking and a multidisciplinary approach, where law is the tool, not the answer.
Our class covered various topics related to legal tech and design, including legal design, UX methodologies, legal project management, legal operations, founding a legal tech startup, and cultivating curiosity and agency.
The students also worked on their own independent projects, which resulted in mock-ups and prototypes.
Why I became a data-driven teacher
A full disclaimer: I am a data enthusiast, but I was quite often confronted with how much I still have to learn in data science.
I have been teaching students and professionals for years, so the teaching part feels very natural to me (in fact, like anything in life, it is a matter of practice, and here is how I do it).
But this was different: I stood before the challenge of designing a full 11-week course from scratch, on a topic that had barely been touched before in my country and was as out of the box as it gets, doubly so at a Law School.
My standpoint from the beginning was that I knew nothing and that I really cared about my students having a great learning experience.
So I approached the design of the class as I would any other product: I made my assumptions, tested and prototyped them, and collected a ton of data to validate and make more informed choices.
Various sources of data used
Since I assumed I had no idea what my students wanted and needed, I started with a prototype based on my experience from shorter projects, while collecting tons of both qualitative and quantitative data.
Entry and Exit Questionnaires
I presented the students with two aligned back-to-back questionnaires at the beginning and the end of the semester.
There were three types of questions included in the entry questionnaire with the following goals:
- Questions relating to the validation of the course goal – including self-reported perceived creativity levels and familiarity with technology and design
- Demographic – their work experience, comfort level with technology, and planned line of work
- Organisational – logistics questions, such as whether classes could run over a bit, and the like
The exit questionnaire then reflected the progress and overall feedback on the class.
Check-in and Check-out
At the beginning of each class, the students were asked to fill out a quick questionnaire to gauge how everyone was doing right then. At the end, there was again a back-to-back aligned form for feedback and for recording the changes.
The questions covered their energy levels (the class was quite late), willingness to speak, whether they had done the readings, etc.
My primary objective with check-in questionnaires was to adjust activities based on student energy levels, ensuring they were not too overhyped or sleepy.
Secondly, it helped me check whether I was the one putting them to sleep. Turns out I wasn't – students left the class on average 12% more energised, with the average moving from 1.82 (neutral, rather tired) to 2.06 (neutral, leaning to energised).
Finally, it provided valuable and immediate feedback on specific activities or how well the lessons resonated, or what should be tweaked.
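To make the check-in/check-out comparison concrete, here is a minimal sketch of how such a before/after shift can be computed. The scores below are made up for illustration – they are not my actual dataset:

```python
# Hypothetical check-in/check-out energy scores on a 1-3 scale
# (1 = tired, 2 = neutral, 3 = energised) – illustrative data only.
check_in = [1, 2, 2, 1, 2, 3, 2, 1, 2, 2]
check_out = [2, 2, 3, 2, 2, 3, 2, 2, 2, 2]

def mean(scores):
    return sum(scores) / len(scores)

before, after = mean(check_in), mean(check_out)
change_pct = (after - before) / before * 100  # relative shift in %
print(f"check-in avg {before:.2f}, check-out avg {after:.2f}, shift {change_pct:+.1f}%")
```

The real pipeline pulled the responses from MS Forms; the arithmetic behind the "12% more energised" figure is the same kind of before/after mean comparison.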
Reverse mentoring
Between semesters, I sat down with six students (roughly 30% of the students who passed the class), who volunteered their time and energy to "unpack" the class with me.
We discussed the key design points and every single one of the major design decisions. I especially appreciated the critical feedback, where we took a hard look at which parts should be tweaked and how.
We did a similar session after the first class of the next semester with some of the students who chose to come and observe it in action.
I couldn’t be more grateful for this incredible support (shoutout to my students, you are the best!)
ChatGPT
I conducted a user interview with ChatGPT to discuss educational concepts related to legal tech and design.
Funnily enough, this was in December, so we were basically done with the class. But I am happy to report that we were on the same page, especially on the combination of experiential, self-directed, and peer-to-peer learning.
I later came back to the bot to brainstorm further ideas and use it as a sounding board, which was especially helpful for the Gen Z perspective.
What I learned by collecting data all semester
Below are just some of my learnings. If you would like to know more, please message me – I basically can't talk about anything else lately.
I also added some action points, so you can easily put these findings into action if you want to.
Students want a clear understanding of the purpose behind each activity and project in the course
The activities that fared worst in all feedback streams were those that did not have a very strongly emphasised why (even if I considered it straightforward).
While the students enjoyed discovering their abilities, they felt most confident when they had a well-formulated learning objective statement.
Action point: emphasise the learning outcomes of each activity.
Students value being asked for feedback, as long as they believe their feedback is taken seriously
I was really afraid that the students would get really annoyed if I kept pushing the forms on them. And I do think that they kind of did (especially because I did everything in MS Forms in order to automate 99% of the data collection and cleaning).
But they seemed ultimately ok with it, because they saw a real impact of their feedback on the class and on their own experience.
Once they realised that the feedback was essentially a way to vote on what was going to happen, and that I was very serious about listening to them, it was all smooth sailing. (And the amount of written feedback steadily increased.)
I think I got some extra points for attending to the funny requests.
Action point: only collect feedback if you intend to act on it.
Shifting gears works
Maybe it is our dopamine-driven society, but the classes that involved a lot of novelty and shifting gears were the highest-ranked ones.
No two classes were the same and I made super sure to involve a lot of different activities in a lot of different contexts.
I spent an awful lot of time on chair and desk arrangements, as well as on tuning a varied range of music (shoutout to the Stanford d.school playlists). The possibility to walk around, stretch, and change contexts really helped add some extra dynamics to the class.
Action point: make sure to play around with your surroundings.
I am way too hard on myself
This was a very funny one to discover, albeit not terribly surprising.
Invariably, I gave the classes much lower scores than my students did in terms of impressions. This could be because I had much greater visibility into the topic and what I wanted to do. Often, I had to adjust the lesson plan because things took longer than I anticipated.
The students didn't seem to notice, so I guess the conclusion is that I am pretty good at improvising.
How am I going to change that in the next semester?
While I will swap around some of the activities and change the setup of some of the design challenges based on the feedback, below are some of the hard lessons regarding data:
- Adjust the scale (making everything a 1-to-4 vote)
- Make sure that students do not report late (by having a QR code with the questionnaire up the entire second half of the class in case students need to leave early)
- Log the approximate percentages of when I speak versus when they speak (my personal goal is 30:70)
- Log the details of the activities (such as whether they are more scoped or more open-ended) to find the right balance of agency and constraints
- Record how much time I spend on preparation (a wonderful irony after 5 years of complaining about time recording)
- Record how comfortable I am with a topic, and how much I like it (I get very hyped about some topics, as you have probably noticed reading this blog), and how that influences teaching it
- Learn more about confidence intervals and further data science context.
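On that last point, here is a minimal sketch of what a 95% confidence interval around a mean questionnaire score looks like. The scores are made up for illustration, and I use a normal approximation for simplicity:

```python
import math

# Hypothetical exit-questionnaire scores on a 1-4 scale – illustrative only.
scores = [3, 4, 4, 3, 2, 4, 3, 3, 4, 2, 3, 4]

n = len(scores)
mean = sum(scores) / n
variance = sum((x - mean) ** 2 for x in scores) / (n - 1)  # sample variance
sem = math.sqrt(variance / n)  # standard error of the mean

# 95% interval via the normal approximation; for a sample this small,
# a t-distribution critical value would be more accurate.
lo, hi = mean - 1.96 * sem, mean + 1.96 * sem
print(f"mean {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The interval is a reminder that with a class-sized sample, a single average hides quite a bit of uncertainty, which is exactly why this is on my learning list.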
Final Thoughts
Being a teacher is hard, but I would not change it for the world.
It was stressful, and I was 150% battling imposter syndrome like never before.
But more than that, it was one of the most rewarding things I have done in a very, very long time. I just feel so enriched by my students' viewpoints and by how they took up the challenge.
The class received very good feedback, both in the data I collected and in the institutional university feedback (100% As at a 66% response rate).
But for me the most valuable were the personal notes by the students who said that this really changed how they look at things. That they now think more about how to improve the world around them. That they now feel more empowered in making bold career choices.
And that really, really, really made my day.
Thank you for reading this, it means the world to me.
Baru
Note for fellow educators and enthusiasts
Hello! If you are reading this and have any points, or something that worked for you and your classes, could I please ask you to reach out? I would love to talk to someone who has been through a similar experience and learn. You can reach me on LinkedIn or at baru@attorneyatcode.com
