Never before have we had so much information readily available at our fingertips. We literally have the world in front of us. Want to see something new and exciting? No problem. Want to learn a new language? You sure can. Want to explore a totally new topic or dig deeper into a familiar one? You can do that too. Not only that, but emerging technologies are being developed, advanced, and disseminated at record speeds. Teachers are using new technologies in the classroom that were not available in years past, people are wearing technology on their bodies (e.g., Fitbits, Apple Watches), doctors are using technology to tackle cancer, and technology is even being used to determine the chemical makeup of objects and people. In each case, technology is being used to gather information and learn something new.
But is there a point where the good (in this case, gathering information and learning something new) can be turned to bad ends? I certainly think so. Let's take the atomic bomb as an example. The science behind the atomic bomb was a huge breakthrough: the nucleus of a heavy element could be split, resulting in a sudden release of explosive energy. This discovery changed science forever. However, people quickly saw the implications of this discovery in a time of war. The atomic bomb was born and used for complete devastation. What started out as a scientific advance was used to devastate the world.
So what can we learn from this? A lot, in hindsight, but ethical decision-making should definitely be a skill we all develop from a young age. In learning technologies, I don’t feel we are at the point of extreme devastation like the atomic bomb, but I do feel good intentions can quickly turn bad. We see this frequently in the classroom. Teachers are told a technology is great for x, y, and z and that it’s the way to go when using technology in the classroom (some vendors even add “I promise”). Terms like “engaging,” “beneficial,” and “learning” are thrown around, but no real evidence is provided. Teachers give the program a try, and the results are less than expected: students are not building their skills, they are no longer engaged with the material after a few weeks, and the teachers are frustrated by the extra preparation required to implement the program in their classroom (e.g., training themselves on how to use it) as well as by the absence of the results they were promised. The program had great intentions, but the results were worse. Teachers end up wasting valuable instructional time and classroom resources that are already very limited. Or what about using social media as a means to learn? What if it goes wrong and cyberbullying prevails? That was not the intention, but it happens. So at this point, was it ethical to implement the program in the classroom? It’s hard to say, since there is no clear right or wrong. There are a lot of gray areas, and I think this is where good intentions can sometimes go bad.
Another area I’m concerned with is this: just because we can do something, does it mean we should? For example, scientists have developed handheld technology that can be used to determine the chemical makeup of things in our physical world. Pretty cool, right? By scanning with such a device, I could learn things like what I had for lunch, what a tree is made of, or what is in the food I eat. Students could use this device to learn about the world around them. Not only that, but it can be used to determine the chemical makeup of things we typically don’t think about, like the air outside. Police could use it to determine whether drugs are being used in a house without even going inside. This brings a whole new level of police investigation, public relations, and evidence to the table. But is it ethical? Some say yes (to an extent); others say no. This brings me to my next concern: an individual’s safety and privacy. Would this be an invasion of some people’s privacy? Another, more common example is access to technology, which increases the amount of data that is produced and collected by researchers. On social media and commercial sites, people freely share surprising amounts of private information, which then becomes searchable and discoverable. What is considered “fair game” to researchers? What about informed consent? I couldn’t find clear answers to these questions.
Ethics is not a clear yes-or-no situation, and there is often a lot of gray space. I strongly feel students of all disciplines should be required to take a course on ethics within their discipline. Principles like do no harm, beneficence, and justice are all easier said than done. Students really need to be able to dig in and reflect on what these actually mean and how they impact the individuals we work with. Such a course also provides a community of colleagues who are there to discuss varying views and solutions to problems that may arise. I also feel ethical review boards play an important role in research ethics. The review process requires researchers to plan the ethical aspects of their study from the very beginning. I find this especially important when dealing with humans and non-humans as participants. It’s also a good idea to have others looking over your proposals and research to hold you accountable for ethical research. One concern for me is making sure IRBs have a variety of different disciplines represented on the board. A board drawn from a single discipline may foster a very specific culture that favors the approval of particular researchers or research methods. As for the process itself, I do not necessarily have any concerns. I have submitted an IRB proposal for a project I was working on independently and have contributed to several at my job. One thing to keep in mind is that IRB review usually takes a while (depending on your participants). One project I was on took a very long time to get IRB approval, while another was approved very quickly. Estimating how long approval will take could become a problem when I would like to start doing research.