One of the things that I have been thinking quite a bit about lately is the term “literacy.” As scholars and teachers, we often use “literacy” to mean “the ability to read and write,” but, practically, it always means more than that. It’s well established that what scholars and educators and standards-writers really mean, especially at the secondary level, is not just reading, but something closer to “the ability to read and synthesize information from a text.” Not just writing, but “the ability to respond coherently within the boundaries of a specific genre.” These are basic social literacy ideas, of course – as Gee says, a student’s reading and writing practices and products need to fit into an acceptable form of school Discourse to count as demonstration of knowledge.
I’ve been thinking about this lately in the context of the new NAEP draft framework for what they are calling technological literacy. I found out about the invitation for public comment and took the survey just before it closed. Here’s a little about what I thought.
Because it’s formalizing a field of instruction for schools, this framework (downloadable from this front page — warning, it’s a huge 175-page PDF) takes on many huge questions, the biggest of which are “what is technology?” and “what is technological literacy?” The board, which includes a large group of specialists from the tech industry, education and engineering academia, and the like, won my heart early on by setting down broad definitions:
“Technology is any modification of the natural world done to fulfill human needs or desires.”
“Technological literacy is the capability to use, understand, and evaluate technology as well as to apply technological concepts and processes to solve problems and reach one’s goals.”
After reading those definitions, I hoped that the framework would focus on ideas about assessing technology practices — perhaps ways to think about how students learn various technical tools to help them accomplish various tasks, as well as how they learn from the tasks. For instance, the traditional school task of information gathering and reporting can be enhanced by technologies like multimedia simulations, graphic representation, and broad search for multiple sources. That’s just one example of what I see as the point of educating students in technology: Helping them to see new possibilities and learn more effectively by using, reflecting on, and evaluating productive tools.
The other part of the draft that I found exciting was the breakdown of “technological literacy” into “technology and society,” “design and systems,” and “information and communication technology.” There are many great research projects investigating such ideas as bringing design thinking, technology-enhanced writing and communication, and mobile technologies into the thinking on education. From where I’m standing, these directions seem like great ones to pursue. So many of the current “knowledge production” fields — media design, news writing, research, information design, engineering, planning, medicine, programming, politics, business development — rest on creative uses of analysis and design skills that come out of these areas. Our current elementary and secondary educational system does little to develop them due to a tight focus on “curricular content,” or, essentially, remembering what’s in the books.
I wanted this framework to take a stab at creating the conditions for schools to begin changing at least some of that. And they do. For instance:
“…[T]he framework specifies three [practices] in particular that students are expected to demonstrate when responding to test questions: Identifying and Applying Technological Principles, Using Technological Processes to Solve Problems and Achieve Goals, Communicating and Collaborating.”
Drawing on the collaborative potential of technologies seems particularly revolutionary in schools — places that often downplay the impact of collaboration by assessing knowledge through individual testing and frequently demonize it by punishing those who cheat. This assessment framework highlights collaboration as important, however. For instance, the table in section 3-4 includes “assessment targets” like analysis and justification, team planning, group critique, and communication with multiple audiences. Potential student tasks, in section 3-7, draw on team-based tasks like designing an investigation into the community impact of new energy technologies, simulating and testing the adequacy of systems, and organizing a campaign around a community-based issue.
At the same time, though, this framework kept reminding me that it exists squarely within the current system of standardized testing. Despite the talk of using real information and communication technologies, as well as encouraging collaborative projects, the framework does not appear to propose a real challenge to the methods of individualized assessment or the general belief that children must be able to solve problems on their own before we can believe that they truly “have” “skills.”
While the subject areas and practices discussed seem solid and forward-thinking to me, I am concerned by language suggesting that these areas will be taught, explained, and reduced to informational items that might become textbook chapters and multiple-choice questions. I am heartened by the inclusion of a broad technological framework with references to major contemporary problems (agriculture, air quality, etc.), but concerned about what this report seems to hold as necessary: that these subjects must be taught and tested in traditional, time-limited, bubble-sheet ways. For instance:
“Select a team of people who could design and build a new toy for a 5-year-old, and justify the choices. Work individually, or collaborate with a virtual person to make your selections.” (from Table 3.3 — I can only assume “collaborate with a virtual person” means that a simple AI/NPC will be built into the testing space?)
“Why do Bill and Sally oil their bike chains and axles and check the brakes each month? What may happen if they do not?” (from Table 3-13)
“In this simulation, students navigate among a file manager, an e-mail client, a Web browser, a word processor, and a spreadsheet to make a travel brochure for a fictional town, Pepford. They are assessed on how they use these ICT tools to accomplish the task, not on the quality of the brochure. The process is more important than the outcome…”
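The “process, not product” scoring the framework describes is easy to imagine in rough terms. As a purely hypothetical sketch (the tool names, workflow steps, and scoring rule below are my own invention, not anything the framework specifies), a simulation like the Pepford brochure task might log tool-use events and credit students for how much of an expected workflow they touched:

```python
# Hypothetical sketch of "process over product" scoring: given a log of
# tool-use events from a simulation like the Pepford brochure task, score
# how much of an expected workflow the student exercised. All step names
# and the scoring rule are invented for illustration, not from the framework.

EXPECTED_STEPS = {
    "browser:search",         # gather information about the town
    "file_manager:save",      # organize downloaded materials
    "spreadsheet:enter",      # tabulate data (attractions, costs, etc.)
    "word_processor:layout",  # assemble the brochure
    "email:send",             # share the result with an audience
}

def process_score(event_log):
    """Return the fraction of expected workflow steps present in the log."""
    observed = {f"{tool}:{action}" for tool, action in event_log}
    return len(EXPECTED_STEPS & observed) / len(EXPECTED_STEPS)

# Example: a student who searched, tabulated, and laid out the brochure,
# but never saved files or emailed the result.
log = [
    ("browser", "search"),
    ("browser", "search"),   # repeated actions earn no extra credit
    ("spreadsheet", "enter"),
    ("word_processor", "layout"),
]
print(process_score(log))  # → 0.6
```

Even this toy version makes the tension visible: the score rewards touching the right tools in isolation, while the collaborative, audience-facing qualities of the task are exactly what a checklist like this cannot see.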
Much of the structure of the sample items resembles that of math or science assessments, so I want to know how the NAGB / NAEP sees this assessment as distinct from currently existing science and math subject tests. I suspect a portfolio assessment completed over the course of a year would do far more to capture students’ growth in actual technical and information-based skills, so I also want to know whether the NAGB / NAEP explored any alternative assessment structures.
There are no real answers to these questions, and no real discussion of broader systemic implications: in the best-case scenario, this assessment will push schools to engage students in collaborative problem-solving with real technologies in order to build their technological literacy… but will the students then be able to answer reductive test questions individually and effectively? Will schools need to restructure in order to fit in a new “technology” subject now that students will be tested on it? What will the professional development for teachers look like?
Perhaps most significantly, I was disturbed by the frequent references to the difficulty of developing the draft framework without “existing item banks” of technology-oriented questions. This terminology suggests that the eventual test will score students on what they have in their heads, an ironic feat for a test on technological literacy. When technologically literate people approach a thorny problem, they rarely sit and think on their own. They use the technology available in their pockets and at their fingertips to search for similar problems, frame their thinking about the problem, or gather information related to the problem. It is precisely these abilities — navigating a distributed knowledge space, using informational tools — that make them efficient thinkers and solvers.
Students can and should develop a full sense of things like systems thinking, information range & bias, team collaboration, communication of ideas, and the need for attribution — and, more importantly, a sense of who needs these skills and why they are useful. “Technological literacy” should not just mean understanding “technology” in the abstract, but how technology can be used to engage and expose real-world systems. “Technological literacy” involves research into, consideration of, and responses to real-world problems. These skills — not canned textbook tasks — should be paramount in classrooms because they are foundational for life in an information-based society. Plenty of seminal resources and studies (Barab’s, Squire’s, and Gee’s work on learning from games; Shaffer’s epistemic games; even Papert’s LOGO studies) demonstrate even young students’ success and interest in this kind of learning with, from, and about technology. Of course, actual investment in this sense of technological literacy would require far more time, teacher feedback and training, and project-oriented curricula than the NAEP draft framework seems to suggest is necessary or desirable.
In the end, I am cautiously optimistic about this document’s ability to open up the conversation — the educational community should think about what technological literacy is and how we can teach it within the boundaries and structures of schools. But at the same time, I am concerned. The educational establishment has the power to turn vital disciplines into a series of textbooks and lectures, and to sap interest and complexity by letting the pressure of recall-based assessments force teachers to cover too much content far too quickly.
Discussions of learning are so often bound by how we currently do and assess school, and I don’t want to see that happen here. I fear it’s going to.