A couple of weeks ago, I had the pleasure of attending the IMS Learning Impact Leadership Institute (LILI). For those of you who aren't familiar with it, IMS is the leading learning software technical interoperability organization for higher education and K12 (and is making some forays into the corporate training and development world as well). They're behind specifications like LIS, which lets your registrar software automagically populate your LMS course shell with students, and LTI, which lets you plug in many different learning applications. (I'll have much more to say about LTI later in this post.)
While you may not pay much attention to them if you aren't a technical person, they have been, and will continue to be, vital to creating the kind of infrastructure necessary to support more and better teaching and learning affordances in our educational technology. As I'll describe in this post, I think the nature of that role is likely to evolve somewhat as the interoperability needs of the sector begin to evolve.
The IMS is very healthy
I'm happy to report that the IMS appears to be thriving by any obvious measure. The conference was well attended. It attracted a remarkably diverse group of people for an event hosted by an organization that could easily be perceived as techie-only. Moreover, the attendees seemed very engaged and the discussions were lively.
On more objective measures, the organization's annual report bears out this impression of strong engagement. They have strong international representation across a range of organization types.
Whether your measure is membership, product certifications, or financial health, the IMS is setting records.
This state of affairs is even more remarkable given that, 13 years ago, there was some question as to whether the IMS was financially sustainable.
If you look carefully at this graph, you will see three distinct periods of growth: 2005-2008, 2009-2013, and 2013-2018. Based on what I know about the state of the organization at the time, the first period can most plausibly be attributed to rapid changes implemented by Rob Abel, who took over the reins of the organization in February of 2006 and likely saved it from extinction. Likewise, the magnitude of growth in the second period is consistent with that of a healthy membership organization that has been put back on track.
But that third period is different. That is not normal growth. That's hockey stick growth.
I am not a San Franciscan. By and large, I do not believe in heroic entrepreneur geniuses who change the world through sheer force of will. Whenever I see that kind of upward trend, I look for a systemic change that enabled a leader or group, through insight, luck, or both, to catch an updraft.
There is no doubt in my mind that the IMS has capitalized on some major updrafts over the last decade. That is an observation, not a criticism. That said, the winds are changing, in part because the IMS has helped move the sector through an important period of evolution and is now helping to usher in the next one. That will raise some new challenges that the IMS is certainly healthy enough to take on, but it will likely require them to develop a few new tricks.
The world of 2005
In the first year of the chart above, when the IMS was in danger of dying, there was very little in the way of ed tech to interoperate. There were LMSs and registrar systems (a.k.a. SISs). Those were the two main systems that needed to talk to each other. And they did, after a fashion. There was an IMS standard at the time, but it wasn't a good one. The result was that, even with the standard, there was a person in every college or university IT department whose job it was to manage the integration process, keep it running, fix it when it broke, and so on. This was not an occasional tweak, but a constant effort that ran from the first day of class registration through the last day of add/drop. If you picture an old-timey railroad engineer shoveling coal into the engine to keep it running and checking the pressure gauge every ten minutes to make sure it didn't blow up, you wouldn't be too far off. As for automatically reporting final grades from the LMS's electronic grade book to the SIS's electronic final grade record, well, forget it.
If you ignore some of the older content-oriented specifications, like QTI for test questions and Common Cartridge for importing static course content, then that was pretty much it in terms of application-to-application interoperability. Once you were inside the LMS, it was basically a bare-bones box with not much you could add. Today, the IMS lists 276 officially certified products that one can plug into any LMS (or other LTI-compliant client), from Academic ASAP to Xinics Commons. I am sure that is a substantial undercount of the number of LTI-compatible applications, since not all compatible product makers get formally certified. In 2005, there were zero, because LTI didn't exist. There were LMS-specific extensions. Blackboard, for example, had Building Blocks. But with a few exceptions, most weren't very elaborate or interesting.
My personal experience at the time was working at SUNY System Administration and running a search committee for an LMS that could be centrally hosted, preferably on a single instance, and potentially support all 64 campuses. For those who aren't familiar with it, SUNY is a highly diverse system, with everything from rural (and urban) community colleges to R1s and everything in between, with some specialty schools thrown into the mix like the Fashion Institute of Technology, a medical school or two, an ophthalmology college, and so on. Both the pedagogical needs and the on-campus support capabilities across the system were (and presumably still are) extremely diverse. There simply was not any existing LMS at the time, with or without proprietary extensions, that could meet such a diverse set of needs across the system. We saw no signs that this state of affairs was changing at a pace visible to the naked eye, and relatively few signs that it was even widely recognized as a problem.
To be honest, I came to the realization of the need fairly slowly myself, one conversation at a time. A few art history professors dragged me excitedly to Columbia University to see an open source image annotation tool, only to be disappointed when they discovered that the tool was developed to teach medical histology, which uses image annotation to teach in a completely different way than is typically employed in art history courses. An astronomy professor at a community college on the far tip of Long Island, where there was relatively little light pollution, wanted to give every astronomy student in SUNY remote access to his telescope, if only we could figure out how to get it to talk to the LMS. Anyone who has either taught or been an instructional designer for a few wildly different subjects has a leg up on this insight (and I had done both), but even so, there are levels of understanding. The art history/histology thing definitely took me by surprise.
A colleague and I, in an effort to raise awareness about the problem, wrote an article about the need for "tinkerable" learning environments in eLearn Magazine. But there were very few models at the time, even in the consumer world. The first iPhone wasn't released until 2007. The first practically usable iPhone wasn't released until 2008. (And we now know that even Steve Jobs was secretly skeptical that apps on a phone were a good idea.) It is a sign of just how impoverished our world of examples was in January of 2006 that the best we could think of to show what a world of learning apps might be like was Google Maps:
There are several different ways in which software can be designed for extensibility. One of the most common is for developers to provide a set of application programming interfaces, or APIs, which other developers can use to hook into their software. For example, Blackboard provides a set of APIs for building extensions that they call "Building Blocks." The company lists about 70 such blocks that have been developed for Blackboard 6 over the several years that the product version has been in existence. That sounds like a lot, doesn't it? On the other hand, in the first five months after Google made the APIs available for Google Maps, at least ten times that many extensions were created for the new tool. Google does not officially track the number of extensions that people create using their APIs, but Mike Pegg, author of the Google Maps Mania blog, estimates that 800-900 English-language extensions, or "mash-ups," with a "usable, polished Google Maps implementation" were developed during that time, with a growth rate continuing at about 1,000 new applications every six months. According to Pegg, "There are about five sites out there that enable users to create a map by taking out an account. These sites include wayfaring.com, communitywalk.com, mapbuilder.net; each of those sites probably has hundreds of maps for which only one key has been registered at Google." (Google requires people who are extending their software to register for free software "keys.") Perhaps for this reason, Chris DiBona, Google's own Open Source Program Manager, has heard estimates that are much higher. "I've seen speculation that there are hundreds or thousands," says DiBona, noting that estimates can vary widely depending on how you count.
However, even the most conservative estimate of Google Maps mash-ups is larger than the total number of extensions that exist for any mainstream LMS by an order of magnitude.
There seemed little hope for this kind of growth any time in the foreseeable future. By early 2007, having failed to persuade SUNY to use its institutional weight to push interoperability forward, I had a new job working at Oracle and was representing them on a specification development committee at the IMS. It was hard work, which I didn't mind, but it was also depressing. There was little incentive for the small number of LMS and SIS vendors who dominated specification development at the time to do anything ambitious. To the contrary, the market was so anemic that the dominant vendors had every reason to maintain their dominance by resisting interoperability. Every step forward represented an internal battle within those companies between the obvious advantage of a competitive moat and the less obvious enlightened self-interest of doing something good for customers. This is simply not the kind of environment in which interoperability standards develop and thrive.
And yet, even though it certainly didn't feel like it, change was in the air.
Glaciers are slow, but they reshape the planet
For starters, there was the LMS, which was both a change agent in and of itself and an indicator of deeper changes in the institutions that were adopting them. EDUCAUSE data shows that the US LMS market became saturated somewhere around 2003. At the time, Blackboard and WebCT had the major leads as #1 and #2, respectively. The dynamic for the next 10 years was a seesaw, with new competitors emerging and Blackboard buying and killing them off as fast as it could. Take a look at the period between 2003 and 2013 in Phil's squid graph:1
It was absolutely vicious.
None of this could materially affect the standards-making process inside the IMS until, first, Blackboard's practice of continually buying up market share eventually failed (thus allowing an actual market with actual market pressures to form) and, second, until the management team that came up with this decidedly anti-competitive strategy…er…chose to spend more time with their respective families. (I'll have more to say about Heckle and Jeckle and their lasting influence on market perceptions in a future post.)
But the essential dynamic during this period is that customers kept trying to leave Blackboard (even if they found themselves being reacquired shortly thereafter) and other companies kept trying to offer better alternatives. So even though we did not have a functioning, competitive market that could incentivize interoperability, and even though it certainly did not feel like we had one, some of the preconditions for one were being established.
Meanwhile, online education growth was being driven by no fewer than three different vectors. First, for-profit providers were hitting their stride. By 2005, the University of Phoenix alone was at over 400,000 enrollments. Second, public access-oriented institutions, many of which had been seeded a decade earlier with grants from the Sloan Foundation, were starting to show impressive growth as well. A couple were getting particular attention. UMUC, for example, may not have had over 400,000 online enrollments in 2005, but they had well over 40,000, which is enough to get the attention of anyone in charge of an access-oriented public university's budget. More quietly, many smaller schools were having online successes proportional to their sizes and missions. For example, when I arrived at SUNY in 2005, they had a handful of community colleges with self-sustaining online degree programs that supported both the missions and the budgets of the campuses. Many more were offering individual courses and partial degrees in order to improve access for students. (Most of New York is rural, after all.)
The third driver of online education, which is more tightly intertwined with the first two than most people realize, is that Online Program Management companies (OPMs) were taking off. The early pioneers, like Deltak (now Wiley Education Services), Embanet, Compass Education (now both subsumed into Pearson), and Orbis (recently acquired by Grand Canyon University) had proved out the model. The second wave was coming. Academic Partnerships and 2Tor (now 2U) were both founded in 2008. Altius Education came in 2009. In 2010, Learning House (now also owned by Wiley) was founded.
Counting online enrollments is a notoriously slippery business, but this chart from the Babson survey is highly suggestive and accurate enough for our purpose:
If you're a campus leader and thirty percent of your students are taking at least one online class, that becomes hard for you to ignore. Uptime becomes much more important. Quality of user experience becomes much more important. Educational affordances become much more important. Obviously, thirty percent is an average, and one that is very unevenly distributed across segments. But it's significant enough to be market-changing.
And the market did change. In a lot of ways, the biggest one being that it became an actual, functioning market (or at least as close to one as we have gotten in this space).
When glaciers recede
Let's revisit that second growth period in the IMS graph, 2008 to 2013, and talk about what was happening in the world during that period. For starters, online continued its rocket ride. The for-profits peaked in 2010 at roughly 2 million enrollments (before beginning their spectacular downward spiral shortly thereafter). Not-for-profits (and odd mostly-not hybrids) ramped up the competition. ASU launched its first online four-year degree in 2006. SNHU started a new online unit in 2009. WGU expanded into Indiana in 2010, which was the same year that Embanet merged with Compass Knowledge and was promptly bought by Pearson. (Wiley acquired Deltak two years later.)
Once again, the more online students you have, the less you are able to tolerate downtime, a poor user interface that drives down productivity, or generic course shells that make it hard to teach students what they need to learn in the ways in which they need to learn it. Instructure was founded in 2008. They emphasized a couple of distinctions from their competitors out of the gate. The first was their native multitenant cloud architecture. Reduced downtime? Check. The second was a strong emphasis on usability. The big feature that they touted, which became their early runaway hit, was SpeedGrader. Increased productivity? Check.
Instructure had found the updraft that would give them their hockey stick growth.
But they also emphasized that they were going to be a learning platform. They weren't going to build out every tool imaginable. Instead, they were going to build a platform and encourage others to build the specialized tools that teachers and students need. And they would aggressively encourage the development and use of standards to do so. On the one hand, this fit from a cultural perspective. Instructure was more like a Silicon Valley company than its competitors, and platforms were hot in the Valley. On the other hand, it was still a little weird for the education space. There still weren't good interoperability standards for what they wanted to do. There still hadn't been an explosion of good learning tools. It is one of those situations where it is hard to tell how much of their success was prescience and how much of it was luck that higher ed caught up with their cultural inclination at that exact moment.
The very same year that Brian Whitmer and Devlin Daley founded Instructure, Chuck Severance and Mark Alier were mentoring Jordi Piguillem on a Google Summer of Code project that would grow into the initial implementation of LTI. In 2010, the same year that Instructure scored its first major win with the Utah Education Network, IMS Global released the final specification for LTI v1.0. All this time that the market had felt like it had been standing still, it had actually been iterating. We just hadn't been experiencing the benefits of it. Chuck, who had been thinking about interoperability partly through his work on Sakai, had been tinkering. Students like Brian and Devlin, who had been frustrated with their LMS, had been tinkering. The IMS, which actually had a precursor specification before LTI, had been tinkering. While conditions hadn't become visible on the surface of the glacier, way down, a mile below, the topology of the land was changing.
Meanwhile in Arizona, in 2009, the very first ASU+GSV summit was held. I admit that I have had writer's block regarding this particular conference the past few years. It has gotten so big that it's hard to know how to think about it, much less how to sum it up. In 2009, it was an idea. What if a university and a company that facilitates start-ups (in a number of ways) got together to encourage ed tech companies to work more effectively with universities? That is my retrospective interpretation of the original vision. I wasn't at a lot of those early conferences and I certainly wasn't an insider. It was hard for me, with my particular background, to know what to make of it then, and it is even harder now.
But something clicked for me this year when it turned out that IMS LILI was held at the same hotel that the ASU+GSV summit had been at a few months earlier. How does the IMS get to 523 product certifications and $8 million in the bank? A lot of things have to go right for that to happen, but for starters, there have to be 523 products to certify and lots of companies that can afford to pay certification fees. That economy simply did not exist in 2008. Without it, there would be no updraft to ride and consequently no hockey stick growth. ASU+GSV's phenomenal growth, and the ecosystem that it enabled, was another major factor influencing what I saw at IMS LILI this month.
There's a lot of chicken-and-egg here. LTI made a lot of this possible, and the success LTI (and IMS Global) have experienced wouldn't have been possible without a lot of this. The harder you stare at the picture, the more complicated it looks. This is what "systems thinking" is all about. There isn't a linear cause-and-effect story. There are multiple interacting feedback loops. It's a complex adaptive system, which means that it doesn't respond in linear or predictable ways.
Update: I received a note from Rob Abel pointing out that a lot of the growth in the last leg came from an explosion of participation in the K12 space. That is good color and consistent with what I've seen in my last couple of LILI conference visits. It's also consistent with the rest of this analysis. K12 benefited from all the dynamics above: the maturation of the LMS market, the dynamics in higher education online that pushed toward SaaS and usability, the huge influx of venture funding, and so on. All of those developments, plus the work inside IMS, made the K12 growth possible, while the dynamics within K12 added another feedback loop to this complex adaptive system.
But respond it finally did. We now have some semblance of a functioning market, and with its rise, the blockers preventing the formation of a vibrant interoperability standards ecosystem of the kind we have today have largely fallen. Now we have to deal with the blockers to the formation of the vibrant interoperability ecosystem that we'll need tomorrow. Because it will be qualitatively different. Tomorrow's blockers aren't market formation problems but rather collaboration methodology problems. They are about creating meaningful learning analytics, which will require solving some wicked problems that can only be tackled through close and well-structured interdisciplinary work. That most definitely includes the standards design process itself.
After the glacier comes the flood
What I saw at the IMS LILI this year was, I think, a milestone. The end of an era. Market pressures now favor interoperability. The same companies that were the most resistant to creating and implementing useful interoperability standards in 2007 are among the most aggressive champions of interoperability today. This isn't to say that foundational interoperability work is "over." Far from it. Rather, the conditions finally exist where it can move forward as it should, still hard but relatively unimpeded by the distortions of a dysfunctional market.
That said, the nature and challenges of the interoperability our sector will be facing in the next decade are fundamentally different from the ones that we faced in the last one. Up until now, we've primarily been concerned with synchronizing administration-related bits across applications. Which people are in this class? Are they students or instructors? What grades did they get on which assignments? And how much does each assignment count toward the final course grade? These challenges are hard in all of the ways that are familiar to anybody who works on any kind of generic data interoperability question.
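To make the administrative flavor of interoperability concrete, here is a minimal sketch of the roster-synchronization problem that specifications like LIS exist to solve. The record shapes and names below are my own invention for illustration, not actual LIS or LTI wire formats, which are far richer and more rigorously specified:

```python
# Toy sketch of SIS-to-LMS roster reconciliation.
# Field names and shapes are invented for illustration only;
# real LIS payloads carry much more structure.

sis_roster = {
    ("alice", "Student"),
    ("bob", "Student"),        # newly registered, not yet in the LMS
    ("prof_kim", "Instructor"),
}

lms_enrollments = {
    ("alice", "Student"),
    ("carol", "Student"),      # dropped in the SIS, still in the LMS
    ("prof_kim", "Instructor"),
}

# The SIS is the system of record, so the sync job computes diffs:
to_add = sis_roster - lms_enrollments      # enroll these in the LMS
to_remove = lms_enrollments - sis_roster   # unenroll these

print(sorted(to_add))     # [('bob', 'Student')]
print(sorted(to_remove))  # [('carol', 'Student')]
```

Even in this toy form you can see why, pre-standardization, someone had to babysit the process daily through add/drop: every add, drop, and role change in the registrar system has to be reflected in the LMS, and any mismatch is a student locked out of (or wrongly left in) a course.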
But the next decade is going to be about data interoperability as it relates to insight. Data scientists assume this is still familiar territory and are excited because it keeps them on the frontier of their own profession. But this won't be generic data science, for several reasons. (I will tell you right now that some of them disagree with me on this. Vehemently.) First, even the most richly instrumented, fully online environments that we have right now are extremely data-impoverished relative to what we need to make good inferences about teaching and learning. For heaven's sake, Amazon still recommends things that I've already bought. If I just bought a toaster oven last month, then how likely is it that I want to buy another one now? And I buy everything on Amazon. If they don't know enough to make good buying recommendations on consumer products, then there is no way that our learning environments are going to have enough data to make judgments that are orders of magnitude more sophisticated.
Well then, some answer, we'll just collect more data! More more more! We'll collect everything! If we collect every bit of data, then we can answer any question. (That is a pretty close paraphrase of what one of the IMS presenters said in one of the handful of learning analytics talks I went to.)
No. You won't collect "everything," even if we ignore the obvious ethical questions, because you don't know what "everything" is. Computer people, having finally freed themselves from the shackles of SQL queries and data marts, are understandably excited to apply that newfound freedom to the important problem space of learning. But it isn't a good fit, because we don't have a very good understanding of the basic cognitive processes involved in learning. As I wrote about (at length) in a previous post, we have to employ multiple cutting-edge machine learning techniques just to get glimpses of learning processes even when we are directly monitoring students' brain activity, because these are extraordinarily complex processes with many hidden variables. Trying to tease out the learning processes inside a student's head by running machine learning algorithms on LMS data is a little like trying to observe the digestive processes of a flatworm at the bottom of the Marianas Trench by studying the wave patterns on the surface of the ocean. There are too many invisible mediating layers to just run a random forest algorithm on your data lake (it all sounds very organic, doesn't it?) and pop out new insights about how students learn.
That doesn't mean we should just throw up our hands, by any means. On the contrary, IMS Global has some extremely good tools close at hand for tackling this problem. But it does mean that they're going to have to take some of the stakeholder engagement practices they've been working at diligently to the next level, to the point where the standards-making process itself may evolve over time.
There is a wonderful data and processing resource that the learning analytics folks have yet to think deeply about how to leverage, as far as I can tell from the conference. The computational power is impressive (and impressively parallel). It's the collective intelligence of educators and learning scientists. Because there are too many confounds to make useful direct inferences from the data, educational inferencing needs to be theory-driven. You need to start with at least some theory of what might be happening inside the learner's head. One that can be either supported or disproven based on evidence. And you need to know what that evidence might look like. If you can spell all that out, then you can start doing interesting things with learning analytics, including machine learning. There's room for learning science, data science, and on-the-ground teaching expertise at the table. In fact, you need all of those kinds of expertise. But the people with those respective kinds of know-how have to be able to talk to each other and work together in the right ways, which is really hard.
The IMS has an impressive foundation for this kind of work, because their Caliper specification seems to provide the basis for a perfectly lovely lingua franca. To begin with, its fundamental structure is triples, which is the same basic concept as the original idea behind the semantic web. If you're not a computer person and this is starting to make your eyes glaze over, don't worry, because this is plain English. Three-word sentences, in fact. Noun, verb, direct object. Student takes test. Question assesses learning objective. Student highlights sentence. Sentence discusses Impressionism.
IMS Caliper expresses learning analytics in statements that can easily be translated into three-word plain-English sentences. These sentences can be strung together into coherent paragraphs. Notice, for example, how the last two example sentences are related. Three-word sentences in this format can be chained together to form longer thoughts. New thoughts. With this one very simple grammatical structure, we have a language that is generative in the linguistic sense. As long as you have words to put into these grammatical placeholders, you can string thoughts together. Or "chain inferences," to sling the lingo. And it turns out, unsurprisingly, that Caliper has a mechanism for defining these words in ways that both humans and machines can understand.
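The chaining idea above can be sketched in a few lines of code. To be clear, this is not actual Caliper syntax (real Caliper events are JSON documents with a controlled vocabulary); the triples and the nouns below are invented purely to illustrate how subject-verb-object statements can be chained into new inferences:

```python
# Toy illustration of chaining triple ("three-word sentence") statements.
# The vocabulary is invented; this is not the Caliper event format.

triples = [
    ("student_42", "highlights", "sentence_7"),
    ("sentence_7", "discusses", "Impressionism"),
    ("question_3", "assesses", "Impressionism"),
]

def chain(start, facts):
    """Follow triples outward from a starting noun, treating each
    statement's object as a new subject to explore, and return
    every noun reached."""
    reached = {start}
    changed = True
    while changed:
        changed = False
        for subject, verb, obj in facts:
            if subject in reached and obj not in reached:
                reached.add(obj)
                changed = True
    return reached

# Chaining "student highlights sentence" with "sentence discusses
# Impressionism" yields a new thought: this student engaged with
# material about Impressionism.
print("Impressionism" in chain("student_42", triples))  # True
```

The point is not the code itself but that a shared, generative grammar like this is simple enough for educators to read as plain English and strict enough for machines to parse literally.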
That has to be the bridge. Humans have to understand the utterances well enough to be able to express their theories on the front end and understand whatever the machine is telling them it may have found on the back end. Machines have to understand them specifically enough to be able to parse the sentences in their own, literal, machine-y way. Theoretically, Caliper could be an ideal language to enable educators and computer scientists to discuss theories about how to better support students, as well as how to test those theories together.
The problem is that the IMS community, at least based on what I saw in the sessions I attended, is not using the specification as an interdisciplinary communication tool in this way yet. What I saw happening instead was a lot of very earnest data scientists pumping as much Caliper data as they can into their data lakes. They come to the conference, give a talk, and, to their credit, shrug their shoulders and admit that they really don't know what to do with those data yet. But then they go home and build bigger pipes, because that's their job. That's what they do.
It isn't their fault. I have been friends with some of these people for a very long time. These are good people. But if you work in the IT department, and you are not a learning scientist or a classroom educator, and the faculty are somewhere between dismissive and disdainful of the idea of talking to you about working together to improve teaching and learning, then what can you do? You do what you know how to do and hope that things will change for the better over time.
It isn't the IMS's fault either. The conference I attended was called the IMS Learning Impact Leadership Institute. That is not a new name. Caliper has a board that helps guide its direction. That board includes educators who are the kind of advocates I want to see on such a body. They are productive irritants in the best possible way. But that's not enough anymore. This is just a really hard problem. It is the challenge of the next decade. To meet it, we have to do more than just make sure the right people are in the room together. We have to develop new ways of working together. New roles, methodologies, ways of talking with each other, and ways of seeing the world.
I'll preview a little bit of a post that I have in my queue for…I am not sure when, but some time soon…by mentioning "learning engineering." This term has gotten a lot of buzz lately, along with some criticism. I'll be writing up my own take on it, but for now I'll say that one reason I think the term is gaining some currency is that it represents a set of skills for being a mediator in the kind of collaboration that I am describing here.
As it turns out, the term was coined by Nobel prize-winning polymath and Carnegie Mellon luminary Herb Simon, after whom Carnegie Mellon University's Simon Initiative was named. And, as it also turns out, the Simon Initiative hosted this year's EEP summit and made some news in the process by contributing $100 million worth of open source software that they use in their research and practice of…wait for it…learning engineering.
Here is a slide that they used in their talk explaining what the heck learning engineering is and what they're doing when they are doing it:
(By the way, the videos of all talks from the summit will be posted online, as promised. Please be patient a bit longer.)
This post has already run long, so rather than unpacking the slide, I'll leave you with a question or two. Think about this graphic as representing a data-informed continuous improvement methodology involving multiple people with multiple kinds of expertise. What would that methodology have to look like? Who would have to be at the table, what kinds of conversations would they have to have, and how would they have to work together?
I'm not suggesting that "learning engineering" is a magical conjuring phrase. But I am suggesting that we need new approaches, new competencies, and likely a new role or two if we're going to get to the next updraft.
- By the way, if you haven't subscribed to Phil's new blog yet, then you really, really should. Like, right now. I'll wait. [↩]