What ChatGPT suggests for career decision-making

ChatGPT and other AI chatbots are currently all the rage, with possibilities ranging from generating creative mashups of ideas in a matter of moments, to problems such as generating entire papers that evade plagiarism detection, or scraping artwork and creating derivatives without artists’ permission or compensation.

Can ChatGPT be helpful for would-be career decision-makers? What happens if I ask ChatGPT for help? No need to wonder – I did, and here’s the response.

What do you think? Not bad advice. It’s missing the “how-tos” – like how to do a self-assessment – and the “where-tos” – like where to research information. There are some other aspects that career researchers and practitioners will recognize as missing right away, such as reflecting on your career path to this point, considering emotions, identifying what barriers and supports you have, and recognizing that the process of making a decision is more than just making a list, checking it twice, and trying it out. But still, the advice seems pretty solid. So, I decided to drill down on self-assessment:

Hmm – this response is VERY interesting. Why these specific assessments? How did these rise to the top of the algorithm? The MBTI is listed as #1? I’ve seen the research, and it’s not pretty when it comes to the MBTI and career decision-making. So, I asked a follow-up question:

So the MBTI and the DISC have high levels of reliability and validity? I’ve never heard that, and have often heard the opposite. So, I queried a bit more, this time asking for references.

Hmm. Notice that I asked for supporting research – I did not ask for “any research” or research that negated – and what I was given for the MBTI was two articles against it. Not ONE article in support – and yet, in the earlier response, the bot stated that both had HIGH reliability and validity. What? That seems a bit contradictory, doesn’t it? The second citation for the DISC is just confusing – it’s a book from 1986 with no clear link to the DISC. If the DISC has been around for decades, why are the first two citations given for it decades old and not particularly relevant? And do you see the takeaway for both? “Should be used with caution”? Yet these were among the top 5 recommended?

What should be the takeaway? Once again, it seems like the AI chatbot COULD be useful for people wanting to make career decisions, but at the same time, it has CLEAR LIMITATIONS and PROBLEMS. Most individuals will not ask the AI for the psychometric properties of an inventory, and if they did, they probably wouldn’t ask for references – and if they did, they might see references posted and assume those references actually supported the bot’s statements. In other AI searches I have conducted, I have found the matching of claims to cited references to be way off and, in some cases, clearly wrong. More importantly, the AI’s advice is straightforward and simple, and while many career decision-makers can likely follow those steps and come to an informed career decision, the process of career decision-making is often more complex than this – if it were that easy, our clients wouldn’t seek help. Research (e.g., Hayden & Osborn, 2020; Walker & Peterson, 2012) has repeatedly shown how issues such as mental health, negative career thoughts, and external experiences such as discrimination, socio-economic status, and the like all impact our ability to make career decisions – and yet the bot did not acknowledge these. And while the bot did suggest talking with a career practitioner (Yay!), it didn’t share how to find one (Boo!).

As a career practitioner, it’s important to be aware that just as patients may Google their symptoms and self-diagnose before seeing a doctor, our clients may do some searching online about how to make a career decision and come in bearing the AI’s advice. I see this as similar to when someone brings in a “color” test or a Facebook survey that told them the career they should pursue. You don’t insult the client by dismissing the results altogether; instead, use them as a launchpad for discussion. What did the “results” say that resonated with you? Would you like to take a different inventory that has been developed and researched for career interests (or skills, or values, etc.), or try a different approach (like card sorts, or writing your career story) to see what that might reveal?

It’s also part of our responsibility to help clients evaluate information about self and options (Sampson et al., 2018) – not in a preachy kind of way, but by demonstrating how some of the information can be accurate and some of it may not be, and how to sift through it. This could be done by modeling: searching online with the client in the moment, with the practitioner thinking out loud while evaluating the links that come up (“oh, this looks like a sponsored site,” or “this is by the author, so that information is likely biased,” or “this was published in 1986 – I wonder if there’s anything newer”).

All in all, ChatGPT and other AI chatbots are tools that some clients will use in their career decision-making process. Some of the information appears appropriate and may be helpful; those clients we may never see in our offices. For others, that information will prove overwhelming as they try to follow the seemingly straightforward steps, and that may lead them to knock on our door. In that case, the online conversation was the impetus to get help – a net positive.

But what about those who are overwhelmed and see this as yet another failure, or as additional proof that they can’t make good decisions – and as a result believe that seeking help will likely end in the same outcome? This is troubling. Perhaps our professional associations can provide guidance to AI developers to include content that encourages information and guidance seekers to get support from professionals, or at least lets them know that such support exists. Maybe as practitioners, we need to be more proactive in asking our clients about their online exploration. Ignoring it doesn’t mean it’s not happening. Bring it into the discussion. Don’t demonize ChatGPT, but don’t canonize it either. It’s one resource among many – and how we use it will affect the outcome.

Hayden, S. C., & Osborn, D. S. (2020). Impact of worry on career thoughts, career decision state, and cognitive information processing-identified skills. Journal of Employment Counseling, 57(4), 163-177. https://doi.org/10.1002/joec.12152  

Sampson, J. P., Jr., Osborn, D., Kettunen, J., Hou, P.-C., Miller, A. K., & Makela, J. P. (2018). The validity of socially-constructed career information. Career Development Quarterly, 66(2), 121-134. https://doi.org/10.1002/cdq.12127

Walker, J. V., III, & Peterson, G. W. (2012). Career thoughts, indecision, and depression: Implications for mental health assessment in career counseling. Journal of Career Assessment, 20, 497-506. https://doi.org/10.1177/1069072712450010

Increase Class Engagement Via a Random Name Picker

As an instructor, I want to know that my students are engaged in the content. But whether in an online Zoom room or face-to-face, it’s impossible to tell if they are on task when I say, “Take five minutes and go work on this yourself.” Some will; others won’t. The ones who don’t often rely on the extraverts in the class to offer up what they’re working on, and there’s never enough time to see everyone’s examples, so they are let off the hook. One option is to have them pair up and share what they’ve been doing, but even then, I’m not convinced that meaningful conversations are really going on as opposed to discussions of weekend plans. And that might be OK – we all need that kind of connection – but the problem comes when they have an actual assignment that requires the skills we just “practiced,” they do poorly on it, and I have to spend more time going back over the material.

Enter a new tool I just found: https://miniwebtool.com/random-name-picker/

It’s not the prettiest thing – when you show it, it includes ads and all – but it is efficient and practical. You input the names of your students and then choose the number of names you want the randomizer to pick at once.

Click on “pick a random name”:

It then spins a pretty wheel that rotates through all the names, building the anticipation:

And then lists the winners like this:

The first time I did it and called on the winners to share, there was the typical “Oh, I didn’t write anything down” reaction. But as the group realized that the draw was truly random – you might be called on multiple times or not at all – their engagement in the activity increased. After I had two or three students share, others wanted to share their examples, so I left time for that. You can always remove a name from the list if someone has been chosen a few times and you want to give others a chance, too. It was fun to hear their chatter as they watched the wheel spin, as well – it brought an extra excitement to the room.
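The site does the picking for you, but if you’d rather run something locally (no ads, works offline), the same idea is easy to sketch in a few lines of Python. This is just an illustration – the roster names and pick count below are made up:

```python
import random

# Hypothetical roster – replace with your own students' names.
roster = ["Ana", "Ben", "Carmen", "Devon", "Elena", "Farid"]

def pick_names(names, count=2):
    """Randomly draw `count` distinct names (no repeats within a round)."""
    return random.sample(names, k=count)

winners = pick_names(roster, count=2)
print("Sharing today:", winners)

# Like removing a name from the website's list: take frequent
# winners out of the roster so others get a chance next round.
for name in winners:
    roster.remove(name)
```

Each call to pick_names is an independent random draw, so, just like the wheel, a student might come up several rounds in a row or not at all.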

Happy to share this fun and practical tool. Hope it works for you!

Tech Twins offering a seminar this week!

We are excited to be back in action this week for the Career Counselors Consortium Northeast.

Still time to register at https://careercounselorsne.org/event-4721372.

Hope many of you are able to join us. We love sharing what we know and learning what you know!

Collaborative & Individual Learning Via Zoom

Problem: Students often limit the definition of a construct (e.g., vocational identity) to a single measure of it (e.g., My Vocational Situation), and in doing so, develop a very myopic view of the construct.

Goal: To have students learn how to expand their understanding of a construct beyond one instrument’s definition.

Challenge: Create an activity that requires active involvement from every student, engages higher-order/critical thinking, AND does it all in 10 minutes.

My solution: I chose a construct from an article we are reviewing in class and pasted the components of that construct’s operational definition on one side of a table. Then I told the students to use whichever research database they preferred, find another article that offered a different definition, and paste it into the table using the annotate function in Zoom. Below is a picture of the activity as they were working.

Next steps: When they were done, I asked them to point out differences between the article’s definition and the others, and we discussed being too narrow and too wide in our definitions. Next, I had them work on defining their own constructs in a shared document. I chose a shared document because some of them have similar topics/constructs, and I wanted to teach them that it’s OK to collaborate and help peers/colleagues problem-solve. This meant that before class, I had to create the shared document and paste in their names, their research questions, and a table for them to work on.

They had to choose one construct from the study they’ve been conceptualizing, find at least two different definitions of the construct, and list at least two instruments they’ve seen in their searching of the literature that might measure the construct or a portion of it. Here’s a picture of two students’ work:

I gave them about 15 minutes to work on that. For their final activity, I had them take two of the measures they had listed and conduct an instrument comparison. This took about 20-25 minutes. Here’s an example of one student’s work:

Reflection: Overall, I thought this process worked well. I demonstrated the technique using a shared article, challenging them to find alternative definitions, and they then applied this skill to their own work. I shared my screen but told them they didn’t have to follow me. I worked with anyone who was stuck finding a definition, an instrument, or details (like cost), and asked them to help each other. I had some other modeling prepared, but didn’t think we’d have enough time both to work through it and for them to work on their own projects, and the latter was likely more useful for them. I did achieve the goal of a ten-minute activity with the annotation exercise, but altogether these three activities took up an hour of class time, so there’s that to consider.

Question: How might you have approached this problem and goal?

Current Career-Related Research Projects

We have some more technology-focused research projects brewing, but in the meantime, we’re asking for your help in spreading the news about career-related research projects in which we are involved. Would you consider participating if you’re eligible? Or perhaps spreading the news if you know someone who is? Each project has either been approved by Florida State University’s IRB or had informed consent waived. If you have questions about any research on this page, please email Dr. Osborn. Click on the links to participate. Thanks for your consideration and help!

Virtual Card Sort

Who can participate: Open to all.

What is required: Sort 36 cards with occupational titles into “would choose,” “might choose,” and “would not choose” categories.

What you can gain: Users receive a summary report and suggested next steps.

Examination of Childhood Trauma, Dysfunctional Career Thoughts, and Career Adaptability

Who can participate: Any adult age 21 or older.

What is required: Complete a demographic form and three questionnaires on the topics above.

What you can gain: Eligible for a $30 Amazon gift card if you provide an email address.

College Career Courses and Vocational Identity Achievement: An Investigation of Mediators and Moderators

Who can participate: Undergraduates from any college or university

What is required: Complete two surveys: one now and one later.

What you can gain: Eligible for a $50 Amazon gift card (one for every 50 participants) if you provide an email address.