Building a smarter computing culture in Fargo, ND
Kevin brought up an interesting point yesterday: he noted a perceived difference in the ratio between the above devices in their bias toward learning versus entertainment (or edutainment).
Each of these devices creates a different experience through its hardware and software makeup. Rob Kitchin and Martin Dodge, in their software-studies book Code/Space, analyze code spatially, arguing that there is a mutual constitution between software and sociospatial practices (16). According to Kitchin and Dodge, the code within software produces spaces that “are subtly evolving layers of context and practices that fold together people and things and actively shape social relations” (13). They contend that “Software … contributes to complex discursive and material practices, relating to both living and nonliving, which work across geographic scales and times to produce diverse spatialities.”
Essentially, Kitchin and Dodge suggest that our production, use, and reproduction of software creates a dyadic, recursive relationship between us and the software that produces these code/spaces. These relationships are reflexive in both material and immaterial spaces, creating a context that varies from person to person, depending upon where, when, and how we use the software. They define code/space as
any space that is reliant and contingent on software/coded infrastructure(s); simultaneously global and local, one and many, territorialized and deterritorialized. Software augments the space’s functions as well as ‘quite literally conditions … existence’ (Thrift and French 2002, 312).
Currently, within our educational system, we have competing devices that produce very different code/spaces. The problem I see is that our school and government officials, policy makers, and decision makers do not understand the purposes and biases of these particular devices. Part of our project is an attempt to refine the concept of computer literacy to include a deeper set of tools in these code/spaces. This way, the next generation of computer users become more than just users; they are also producers, to varying degrees, with varying levels of procedural literacy skills. At the very least, they understand the coded infrastructures at work and how the WYSIWYG GUIs of this world are built upon something worth exploring and analyzing.
To help rein in these two forces of code/space and computer literacy, I recently read a blog post by Alex Reid called “On tools and concepts.” In this post, Reid explores the differences between the spaces of meaning around learning and concepts within the scope of pedagogical practices. Learning, as Reid writes, “can be defined as the invention of concepts, at least on the individual level,” where concepts become the time and space (my words there) where work can be done.
Alan Kay recently gave a lecture on this subject, regarding computer programming and scaling, in which he criticizes even his own object-oriented programming paradigm as an old concept that needs to be surpassed, so that our software doesn’t become so complex that we are unable to understand it at all. In the screen capture of one of his slides, he compares lines of code to books: Windows Vista’s 120,000,000+ lines of code amount to approximately 6,000 books (at 400 pages/book). This comparison is interesting on two levels, as it forces us to consider code on a literate level, and as it shows how ridiculous it is to think that one person, or even a small team of people, could analyze such an amount of text, a text that creates a code/space many around the world are highly engaged with. (Although Vista is most likely not a good candidate for that assertion.) All this to say, programming languages aren’t getting any less complex, and the degree of complication only gets worse as we “progress,” the word that Kay criticizes at the start of his lecture.
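Kay’s slide arithmetic is easy to verify; a quick sketch, assuming a density of 50 printed lines per page (my assumption, implied by his numbers rather than stated on the slide):

```python
# Rough check of Kay's slide arithmetic: lines of code expressed as books.
LINES_OF_CODE = 120_000_000   # Windows Vista, per Kay's slide
LINES_PER_PAGE = 50           # assumed printed-code density (not on the slide)
PAGES_PER_BOOK = 400          # per Kay's slide

pages = LINES_OF_CODE / LINES_PER_PAGE
books = pages / PAGES_PER_BOOK
print(f"{pages:,.0f} pages, or about {books:,.0f} books")
# prints "2,400,000 pages, or about 6,000 books"
```

At 50 lines per page, the numbers line up exactly with the 6,000-book figure on the slide.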
So, in light of this view of software as a constructed code/space, I want, for the sake of this post, to begin to better understand these three different devices, each of which creates a different code/space. I hope to consider how each device has a bias toward a particular type of learning and a particular type of conceptual practice. For the sake of some kind of brevity, I will look at the three devices in three different posts, starting with the iPad.
This is where I will also provide a disclaimer: this examination is predominantly meant to flex some learning-analytic muscles, and I hope some discussion is derived from these posts. I am mainly writing it because I have been reading many different texts and want to take them for a “test drive.”
Apple’s iOS GUIs are very intuitive for the user. It is as simple as tapping the screen, which my four-year-old daughter can do quite easily. The ability to swipe and move things is also intuitive, creating a space with some affordances for customizing app placement and choice of background. Beyond this, it is very much a proprietary device. Customizing the device any further is very difficult, since the code is not only invisible but impossible to see unless the device is hacked (or jailbroken), which wouldn’t be an accepted practice in many institutions of learning.
So, what does this code/space implicitly suggest for the user with regard to learning computer literacy skills? Its code/space has a bias toward consumption. If you have a need, Apple even advertises, “There’s an app for that.” Walter Bender, during his TED talk in Brussels, suggests that his work with Sugar Labs is trying to combat that type of technology consumption. A user cannot easily become a developer with an iPad, since iOS is not designed for developing code. The device is built around a consumption loop, where a person must turn to the store for answers and tools. Instead of learning geometry concepts by developing one’s own app, a person must rely on someone else to develop it, learning by using instead of by developing.
The ratio seems to be in favor of entertainment. After all, the most popular apps are games; some are supposed to be educational, but they still sway to the side of edutainment, producing invisible rote skills (like memorizing your multiplication table) rather than learning by creating and exploring concepts. This is particularly true with regard to computer literacy skills, since the device, again, is proprietary.
In fairness, users can become producers and write their own apps, to be filtered through the approval process into the App Store. So, the user has the potential to build on top of iOS with apps, but Apple charges an initial $99 fee to submit apps and takes a 30% cut of each sale. You also need to build the app on a Mac, and obviously test it on a device, so there are many walls in the way of anyone (notably, a child) developing an app for the official store. I think the proprietary bias of Apple’s iOS creates a code/space that promotes code development among those who are entrepreneurial, but also on the well-endowed side of the digital divide.
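To put those costs in concrete terms, a quick sketch of the break-even math using the $99 fee and 30% cut mentioned above (the $0.99 app price is a hypothetical example, not from any source):

```python
# Sketch: how many sales recoup Apple's developer fee, given its 30% cut.
# The app price below is a hypothetical example.
import math

DEVELOPER_FEE = 99.00   # fee mentioned in the post
APPLE_CUT = 0.30        # Apple's share of each sale
app_price = 0.99        # hypothetical price of a cheap app

net_per_sale = app_price * (1 - APPLE_CUT)          # developer's share
sales_to_break_even = math.ceil(DEVELOPER_FEE / net_per_sale)
print(sales_to_break_even)  # prints 143
```

So a would-be child developer selling a $0.99 app would need well over a hundred sales just to cover the entry fee, before counting the cost of a Mac and a test device.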
The code/space also, from what I understand, doesn’t make it easy to customize the overall OS environment. And why would you want to tinker with it? Break it? Or change the OS? Because it works, right? For me, the iPad (or any proprietary device) carries the implicit suggestion that Douglas Rushkoff takes on: the notion of the computer as a completed product. You buy it and you use it accordingly. You update the software and use it until it becomes noticeably slow or breaks down completely. Then you buy a new one.
The iPad, to me, has become a device that represents this media ecology and practice. The wave of touch devices since the iPhone has created a frenzy for a streamlined, easy user experience, which is good and important as a future goal for procedural skills. Yet this type of device creates an issue in environments where learning is supposed to occur. The iPad’s bias toward seeking an answer via the “app for that” culture assumes the answer is already there for the taking. If these devices become standard in schools, how will we prepare students with the computer literacy skills to compete in a future job market dominated by CS-tech careers? (See Larry Cuban’s post about a school in Maine that has purchased iPads for kindergarteners.) If these devices are transformative, do we really understand how, why, and what the device is doing for/with us?
If we really want to create a smart computing culture with a deeper set of computer literacy skills, then we need to consider how to tinker with these code/spaces, or whether we can at all. The iPad contributes a breakthrough in design and functionality that can create numerous different types of spaces, depending upon which app is running and how the user is using it, but the system is in lockdown, with little room for computer literacy skills to be learned beyond tap-style navigation techniques. Essentially, it ties into Nicholas Carr’s Shallows and Google problems. From our project’s perspective (and, yes, I have a bias here, despite being an iPhone user), it is not the device for us to explore a deeper set of computer literacy skills.
Personally, I’d rather have my four-year-old daughter using Sugar on a Stick on a cheap laptop, starting to break it, tinkering with it, and seeing how it all works under the GUI hood.