Thanks for this great summary! Part of my motivation for joining this course is to experience this micro-credentialing system first-hand, though I doubt I’ll “go all the way” - maybe just first base…
Sub-questions for my topic
A recursive list…
- Is standardised testing truly accessible? What are its limits in testing learning? How do the costs of standardised testing compare to other forms of assessment? Is standardised testing always the “one to beat” in a comparison? Are there hidden costs and benefits that need to be teased out?
- How can “human-scaling” strategies such as self-assessment and peer assessment help? What controls need to be put in place to ensure (and demonstrate) consistent quality?
- How open is “open”? Should it include self-structured assessment as well as self-assessment? (ie, here is the framework I am using, here is my goal in that framework, here is my path, here is my demonstration, here is how to assess me, and here’s my feedback on your assessment!)
- Should we separate summative from formative assessment? My implicit assumption was that this was about summative assessment, but is there a grey area between the two - such as the point above, which relates to metacognition, or when a formative assessment morphs by stages into a summative one (self/peer/instructor/evaluator)?
- How can new technologies such as AI help? Are these yet fit for purpose or still on the horizon?
Understood. You are most welcome to sip and dip in the pathways you find interesting to get a feel for our OER-enabled micro-credentialing system.
Being entirely open source, our component-based delivery platform is very easy to replicate. Any institution or individual who can host a WordPress site can publish their own instance of an OERu course. For example, Otago Polytechnic could publish the LiDA101 course for their own students, and another institution in Canada could publish its own branded instance of the same course. The learner interactions from these cohorts could then be syndicated into a global course feed.
At OERu we ran a Google Summer of Code project a few years ago piloting self-assessment for formative purposes. It worked reasonably well in promoting learner engagement and improving retention in this mOOC environment. The piece we didn’t solve with this pilot was automating the assignment of peer assessors to assessment submissions. In a dynamic course with people joining and leaving, there needs to be a scalable, automated process to ensure an even distribution of assessors.
I’ve been toying with the idea of “open sourcing” the development of objective item assessment banks where we engage learners in developing item banks. This has pedagogical value because if you want to learn something - teach it ;-).
With objective item assessment we can monitor the validity and reliability of “pre-production” test items - including through a learner peer-review process - before they’re included in the test bank. This would require some code development to implement, but I think it could be a powerful point of difference for OER when compared to closed models.
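The kind of monitoring described above is classical item analysis. As a minimal sketch (the function name and the acceptance thresholds are illustrative assumptions, not OERu’s actual implementation), two standard per-item statistics are the difficulty index (proportion of learners answering correctly) and the point-biserial discrimination (correlation between the item score and the total test score):

```python
from statistics import mean, pstdev

def item_stats(responses):
    """responses: one 0/1 vector per learner, one entry per item.
    Returns (difficulty, discrimination) per item. Difficulty is the
    proportion correct; discrimination is the point-biserial correlation
    between the item score and the learner's total score. What thresholds
    would qualify an item for the production bank is a policy decision."""
    n_items = len(responses[0])
    totals = [sum(r) for r in responses]  # each learner's total score
    stats = []
    for i in range(n_items):
        scores = [r[i] for r in responses]
        p = mean(scores)  # difficulty: proportion answering correctly
        sd_t = pstdev(totals)
        if sd_t == 0 or p in (0, 1):
            rpb = 0.0  # undefined when there's no variance to correlate
        else:
            m1 = mean(t for t, s in zip(totals, scores) if s == 1)
            m0 = mean(t for t, s in zip(totals, scores) if s == 0)
            q = 1 - p
            # point-biserial: (M1 - M0) * sqrt(p*q) / sd(total)
            rpb = (m1 - m0) * (p * q) ** 0.5 / sd_t
        stats.append((round(p, 2), round(rpb, 2)))
    return stats

# Tiny worked example: 4 learners, 2 items
print(item_stats([[1, 1], [1, 0], [0, 0], [1, 1]]))
```

Items with very high/low difficulty or low discrimination would be flagged for review rather than promoted to the bank.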
Of interest - when developing the OERu’s transnational credit transfer system for summative evaluation - a significant talking point was the student identity piece - i.e. acceptable systems to ensure that the learner completing the assessment is the “registered” student. Notwithstanding the sophistication of online proctoring and other forms of identity validation - this component generates long conversations in the academy ;-).
Interesting problem regarding distribution of assessors… An Open Badge Factory teacher PD initiative in Finland solved this with peer-assessment “panels”: teacher learners applied for edtech badges, which were evaluated by panels of badge-qualified reviewers who could opt out at any time, implying a dynamic pool. Approval or disapproval was decided by whichever of “yeas” or “nays” reached a minimum number (like 3) first. It solved a real bottleneck in assessment, but could have been better, IMO: the qualifying badges were subject-related, with nothing about assessment itself. OK, they were teachers, so assessment is in their DNA, but nothing was put in place to ensure inter-rater reliability.
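The “first side to reach a minimum” rule described above is simple enough to sketch. This is a hypothetical illustration of the decision logic, not the Open Badge Factory code; the threshold of 3 is the example figure mentioned:

```python
def panel_decision(votes, threshold=3):
    """Process panel votes in arrival order. Returns 'approved' or
    'rejected' as soon as yeas or nays reach the threshold, or None
    while the panel is still open (neither side has enough votes)."""
    yeas = nays = 0
    for vote in votes:
        if vote == "yea":
            yeas += 1
        else:
            nays += 1
        if yeas >= threshold:
            return "approved"
        if nays >= threshold:
            return "rejected"
    return None  # still pending

# Three approvals arrive before three rejections
print(panel_decision(["yea", "nay", "yea", "yea"]))
```

A nice property of the rule is that a clear-cut submission closes early, so reviewer effort concentrates on contested cases.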
Indeed - here are our instructions to help interested institutions (whether OERu partners or not) avail themselves of our open materials: https://tech.oeru.org/oeru-mediawiki-wordpress-snapshot-toolchain
For what it’s worth, Binita, I think this is a great topic… I’ve done some reflection on the question when combined with this concept of “open source”… perhaps this post I wrote will give you some avenues for exploration: https://davelane.nz/different-approach-digital-technology-schools
Yes! It does raise an interesting question - could it also be a learning resource if it were big enough?
That’s a smart model - I like it. I can see how code could be developed to automate a system like this, assuming a critical mass of participants. Working internationally, we would need to think carefully about matching peer assessors to the language of the assessment. But in theory, I think it’s doable.
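As a rough sketch of the matching step (all names, the pool structure, and the load-balancing rule are my own illustrative assumptions), assessors could be filtered by shared language and then picked by lightest current workload:

```python
def assign_assessors(submission_lang, pool, n=2):
    """Pick n assessors who share the submission's language, preferring
    those with the lightest current review load. `pool` maps an assessor
    name to {'langs': set of language codes, 'load': open reviews}."""
    eligible = [name for name, info in pool.items()
                if submission_lang in info["langs"]]
    eligible.sort(key=lambda name: pool[name]["load"])  # lightest first
    chosen = eligible[:n]
    for name in chosen:
        pool[name]["load"] += 1  # record the new assignment
    return chosen

# Hypothetical pool: only assessors with English are eligible
pool = {
    "ana":  {"langs": {"en", "es"}, "load": 0},
    "ben":  {"langs": {"en"},       "load": 2},
    "chen": {"langs": {"fr"},       "load": 0},
}
print(assign_assessors("en", pool, n=2))
```

A real system would also need to handle assessors dropping out mid-review and reassign their open submissions, which is where the dynamic-pool idea from the Finnish panels becomes relevant.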
Exactly! It could definitely be a learning resource - e.g. ancillary materials for open textbooks.
As you point out, with large test banks, unrestricted open access to practice sessions would not negatively impact the statistical validity and reliability of a summative test for credit drawn from the same bank.
Agreed, this can be a rabbit hole… a bit like blockchain, which can be a sht-in/sht-out zombie. One reason I like badges is that they can be collected and curated as a system of stronger and weaker signals, a bit like a star system. Multiple signals, diverse sources, and a choice of confirmation pathways = triangulation for robust recognition.
Great! Reminds me when I was on the “Reach for the Top” quiz show in high school in Canada - we used old show scripts to rehearse answering the right kinds of questions quickly, including fake buzzers. It got us a bit further down the road, though we didn’t take the laurels.
Research topic: Steps to prevent cyber bullying in schools
I have always been concerned about bullying in schools. I have selected cyberbullying in schools as my topic.
What strategies can a secondary teacher adopt to reduce the incidence of cyberbullying in her classrooms?
My study will be limited to schools in India, with a comprehensive review of available literature from research both in India and abroad. Of late, students in secondary schools have been getting addicted to PUBG, and we have had cases of suicides and rage killings hitting the headlines.
My keywords are:
#bullying #cyberbullying #secondaryschool #adolescence #strategies #PUBG #smartphones
I think that is an excellent research topic. It’s meaningful and you have a personal interest in helping to solve the challenge of cyber bullying. You have defined the scope and have limited your focus to secondary schools in India.
Good luck with your research!
Thanks Wayne for your feedback!
My Mastodon account seems to be stuck. Could you please get it activated? I would like to interact with a few of the participants.
Have a great Sunday!
I just checked - your Mastodon account seems fine. Refresh your browser page by holding down the “Shift” key while clicking refresh. Then log out and log in again to see if that works. The site is working for me.
Thanks for your prompt response…
No, I’m afraid it still isn’t working.
My email is: email@example.com.
I have reset my password just now.
I will try again later in the afternoon.
I can see what the problem is: you must enter your own personal email address when resetting your password, not the placeholder (firstname.lastname@example.org), otherwise you will not receive the confirmation email.
Good luck when you try later!