JavaScript and its role in Artificial Intelligence, AR, and VR

Interviewed by Christophe Limpalair on 08/09/2016

Eric is a super smart guy full of ideas and experience, so in this episode I pick his brain on Artificial Intelligence, Augmented Reality and Virtual Reality, and how JavaScript plays a role in those topics.

I also ask him about hiring, from both sides of the table: companies hiring, and candidates looking to get hired. Just getting started in the industry with no experience? Don't know how to judge a candidate's skills? We've got some tips for you.

Not tracking JavaScript errors in production? This is going to change your life. Check out Rollbar, the full-stack error tracking solution that I recommend, and get 90 days free ($87 value).
Get your projects running in the cloud in no time, even if you have no infrastructure experience, with a free month on a DigitalOcean droplet (code: SCALEYOURCODE).

Interview Snippets

Eric, welcome back. Since it has been a year, what have you been doing?


2:04


I've been putting out more JavaScript courses, playing with new technologies, and I got really deep into React and Redux. Been having a lot of fun with those.


What do you think about all those changes and different tools that have come out?


2:24


Redux introduces a more functional approach to JavaScript state management, which is pretty amazing. I've enjoyed building it into my projects, and it has really simplified things like unit testing for your app's state management.


I really appreciate the simplicity of reducers in Redux. If you're not familiar with Redux, reducers are regular pure functions, like the ones you pass to Array.prototype.reduce, applied to your app's state management.
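To make that concrete, here's a minimal sketch of a Redux-style reducer (my illustration, not code from the interview; the action names are made up):

```typescript
// A reducer is a pure function: (state, action) => newState.
type CounterAction = { type: 'INCREMENT' } | { type: 'ADD'; amount: number };

const counter = (state: number = 0, action: CounterAction): number => {
  switch (action.type) {
    case 'INCREMENT':
      return state + 1;
    case 'ADD':
      return state + action.amount;
    default:
      return state; // unrecognized actions leave state unchanged
  }
};

// Purity is what makes unit testing so simple: no mocks, no setup.
// expect(counter(1, { type: 'ADD', amount: 2 })).toBe(3);
```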


Using reducers for state management has been really amazing and has totally transformed the way I think about building apps, so I've been really happy with it.


So there have been some really useful transformations.


3:40


Definitely! I've also played with Angular 2 just a little, and I've been playing with TypeScript. I have mixed feelings about both of those so far. Angular 2, compared to React and Redux, feels like it has a lot more overhead without a lot of benefit. For example, writing unit tests for the views in Angular 2 is much more complicated than writing unit tests for things like Pure Components in React.


Maybe as I use it more I'll start to "feel" it a little more.


In TypeScript, I really like the IDE (integrated development environment) type hinting it gives you, and I like the type inference capability, so you don't have to manually annotate everything, which is fantastic ... an amazing feature. I like that more than the type hinting available with Tern.js. The problem is that since TypeScript needs to see how the types flow through the program, sometimes it will infer types that are a little too strict, and you have to go in and manually loosen up the type annotations. Sometimes that's really hard to do, especially if you use any kind of complicated functional programming techniques, which I tend to do once in a while.
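As an illustration (my example, not one from the interview), here's the kind of overly strict inference he may be describing, where a type parameter gets pinned down by the first argument and has to be loosened by hand:

```typescript
// TypeScript infers T from the first argument and then holds you to it.
const pair = <T>(a: T, b: T): T[] => [a, b];

// pair(1, 'x');
// error: argument of type '"x"' is not assignable to parameter of
// type 'number' -- T was inferred as number, which is too strict here.

// Loosening means spelling out the type parameter yourself:
const mixed = pair<number | string>(1, 'x'); // OK: (number | string)[]
```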


Then you start to wish for features like higher-kinded types. I wrote an article (The Shocking Secret About Static Types) about how people expected static types (with statically typed languages, variable types are checked at compile time) to give you really strong protection against bugs. I've looked at several studies, and it turns out that static typing doesn't really reduce the overall density of bugs that escape into production. It's a weak effect, much smaller than I would have expected, especially when you compare it to things like TDD (test-driven development), where you write tests for the functionality of the software before you implement the software.
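Here's a purely illustrative sketch of that test-first workflow (mine, not from the interview; the slugify function is made up):

```typescript
// Test-driven development in miniature: the test exists before the code.
import { strict as assert } from 'assert';

// 1. Write a failing test that pins down the behavior you want:
function testSlugify(): void {
  assert.equal(slugify('Hello World'), 'hello-world');
  assert.equal(slugify('  Static   Types  '), 'static-types');
}

// 2. Then write just enough implementation to make it pass:
function slugify(title: string): string {
  return title.trim().toLowerCase().replace(/\s+/g, '-');
}

testSlugify(); // throws if the behavior regresses
```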


When you get TDD and good test coverage, the number of bugs that escape into production is reduced by between 40 and 80 percent; the number for static typing is closer to 3 percent. My productivity does get a boost from the reduced cognitive load: I don't have to remember all the interfaces, because I can just start typing a function call and the IDE tells me the interface for it, and that's really wonderful. But when I have to go in and manually re-annotate types because the inference is wrong, especially when it gets into anything like polymorphism or higher-kinded typing situations, that stuff slows me down and gets in my way almost as much as it helps. So, I'm kind of on the fence.


I really like type and interface annotations for documentation purposes and to help the developer know how to use an interface, but when it comes down to, "Should this thing be throwing errors at me and preventing me from doing my work?" I'm like, "Not really."


Haskell has a really great type system, and I never felt like that one got in my way. It just depends on what the type system is and how it works. There have been some interesting developments in the past year.


You write a lot of blog posts on JavaScript and other topics. One great example is the AI post you came up with that takes a look at neurons. How do you come up with all these ideas for blog posts?


9:28


I get a ton of questions from people learning about JavaScript and new technologies, and sometimes those influence what I write about. Sometimes, I'm just working on something and I come up with an insight, and I think, "I should share this with everybody, with the greater world, because there's some learning worth sharing."


And I write for my students. There are certain areas I cover because they fill in knowledge gaps: things a lot of students should be learning but aren't.


I don't remember how that AI post popped into my head, but I think I might have been thinking about all the applications for augmented reality and how much AI is needed to pull that off and make it work well. I think I wanted to fill in some knowledge gaps about how these technologies work.


You could just go in and download a library that does some pattern recognition for you and you could skip trying to build it yourself, but I wanted to get people thinking about how the human brain works and how we can mimic real intelligence using machines. A lot of the current algorithms don't really do that very well. They do it to a very limited extent.


They have to be very specifically coded and trained for a particular task. For instance, if you want to do image recognition, you use some neural network and train it on how to recognize cats or people's faces. Things like the Instagram filters work by recognizing facial features and figuring out where your eyes are so they can put on funky shades and things like that. That's all using artificial intelligence.


So, I was thinking that a lot of people will be applying this to a lot of different apps and to things like self-driving cars. There's a really big push now for that. There will be half a million self-driving cars on the road in the next four years, or at least added to fleets like Uber's. Uber is planning on half a million alone, and there are other companies producing self-driving car fleets. That's going to be really big in the next 20 years. It's going to play a huge role in our lives.


I just wanted to get people thinking about how all this stuff works, how the brain works, and how we can make computers think as well as people do, at least in certain ways. They already think better than people do on things like adding numbers together. What we need is to make computers better at the general ability to learn anything.


When I thought about all the changes since last year, I felt we should talk about some of the big advances in Artificial Intelligence (AI) as well as augmented and virtual reality. You have written blog posts about these on Medium.


One post is called "How to Build a Neuron: Exploring AI in JavaScript," Parts 1 and 2.


You started that post by saying "Years ago, I was working on a project that needed to be adaptive. Essentially, the software needed to learn and get better at a frequently repeated task over time." Is it actually possible to have code that learns from its environments and from things it has already done repeatedly? How does that work?


13:06


Of course it's possible, because we see it every day. Any time Google, Google Photos, or Facebook automatically tags a photo for you, it's because software has learned to recognize people's features and specific people, so it knows how to tag them when it sees them again.


So you see this all around you. Augmented reality is completely dependent on this because it has to be good at pattern recognition so it can detect edges, surfaces, walls and floors; so it can position things correctly in your field of view and in your reality. This stuff is clearly possible because people are doing it.


The specific software I was writing tried to figure out which data was most likely to be needed for a specific person's next request. It did pattern detection on app state: given the previous state of the app, what is the next state likely to be? That way, it could learn to predict the user's next needs in terms of which data should be sent next.
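Here's a minimal sketch of that prediction idea (my reconstruction of the concept, not Eric's actual code): count which app state tends to follow which, then pre-queue the data for the most likely next state.

```typescript
// Track how often each app state follows another, and predict the
// most likely next state so its data can be prefetched.
const transitions = new Map<string, Map<string, number>>();

function record(prevState: string, nextState: string): void {
  const counts = transitions.get(prevState) ?? new Map<string, number>();
  counts.set(nextState, (counts.get(nextState) ?? 0) + 1);
  transitions.set(prevState, counts);
}

function predictNext(state: string): string | undefined {
  const counts = transitions.get(state);
  if (!counts) return undefined;
  // Pick the follow-up state we've observed most often.
  return [...counts.entries()].sort((a, b) => b[1] - a[1])[0][0];
}

record('home', 'search');
record('home', 'search');
record('home', 'profile');
predictNext('home'); // 'search' -> pre-queue the search results data
```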


In that way, it can pre-queue that data and have it ready to send, and it can reduce the amount of data that needs to be cached in memory. Back in 2000, I was able to do very limited, very simple pattern recognition that didn't require huge resources. Now it's much more impressive. They've got augmented reality apps that let you point your phone at things, and it will speak out loud what you're looking at.


They're using that for things like helping blind people understand what's around them in the environment. It definitely is possible, and it is being used a lot.


It's interesting to understand that it doesn't actually manipulate the code; it's not re-programming itself. Instead, what you have is a network of neurons, like the brain: a bunch of cells that are all connected in one great big network with hundreds of trillions of connections. We can't do that in computers yet, but we're getting there. You're not going to emulate that on your MacBook anytime soon, though. It's this network of interconnected things.


Neurons are really simple. All they do is take inputs from a bunch of other neurons ... around a hundred in some instances ... far fewer in a lot of neural networks. There are a lot of simple neural networks, nowhere near the complexity of the human brain, that work with far fewer connections, and they do their simple pattern recognition tasks okay. The human brain can potentially have hundreds of thousands of connections between neurons, which makes for a really complex network.


What happens is that each of those inputs gets weighted, depending on whether or not the input neuron and the neuron that's weighing the inputs fire together frequently. The neuron listens to all of its connections, and when a lot of other neurons, a lot of other inputs, are firing at the same time, that neuron fires too.


As soon as the summed input reaches a certain threshold, the neuron fires what's called an action potential, and that gets delivered along the axon to a bunch of other neurons. So the axon is the output and the dendrites are the inputs: signals come in through the dendrites, and when there's enough signal, the neuron fires down its axon to other neurons.


That's basically what a neuron is and how it works. The way neural networks learn is by weighting those inputs. For instance, if two neurons tend to agree a lot, that input gets a lot more weight, so the weight gets increased over time. If they tend to disagree a lot, the weight gets decreased over time, so when this neuron is firing, the other neuron doesn't believe it.


It's like, "I don't need to fire when this one's going." So it might actually create an attenuating effect on the input: instead of adding to the sum of the signal, it subtracts from it. That's the basic way neurons work. The way a lot of neural network algorithms work is that they take the result you're after, and when the neuron's output doesn't match that result, they calculate the difference and adjust the input weights accordingly.
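Here's a tiny sketch of that weighted-sum-and-threshold behavior (an illustration of the idea, not code from the interview; the weights are made up):

```typescript
// A toy neuron: weighted inputs summed against a firing threshold.
function neuron(inputs: number[], weights: number[], threshold: number): 0 | 1 {
  const sum = inputs.reduce((acc, x, i) => acc + x * weights[i], 0);
  return sum >= threshold ? 1 : 0; // fire an "action potential", or stay silent
}

// A negative weight attenuates: a disagreeing input subtracts from the sum.
neuron([1, 1, 1], [0.6, 0.5, -0.4], 1); // 0.7 < 1  -> 0 (stays silent)
neuron([1, 1, 0], [0.6, 0.5, -0.4], 1); // 1.1 >= 1 -> 1 (fires)
```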


There are lots of different algorithms used to try to do that training. One of them is a regression algorithm. There are a bunch of other algorithms for neural networks that are really interesting. Some of them, instead of trying to weight the inputs, just rewire the network completely. Those are usually based on genetic programming: there's an algorithm called NEAT (NeuroEvolution of Augmenting Topologies) that uses genetic programming, and there's another one called HyperNEAT (specialized to evolve large-scale structures and able to learn the "deep" parts of a neural network) that tries to balance the structure of the network a little more than the NEAT algorithm does. (See: "Comparison of NEAT and HyperNEAT on a Strategic Decision-Making Problem.")


There are a bunch of other statistical models. I could go on and on about the ways that people have tried to emulate human intelligence in machines.


Hidden Markov Models (HMMs), especially known for their application in temporal pattern recognition such as speech, handwriting, gesture recognition, part-of-speech tagging, musical score following, partial discharges, and bioinformatics (source), are a really popular solution that has been used a lot in audio recognition. An HMM can listen to speech and turn it into sentences and text, and try to parse what you're saying. I think they've also been used a lot in optical character recognition: handwriting, scanning documents, things like that.


Three questions based on everything you just said ... Question 1: How did you learn all this about neurons?


21:28


That first AI thing I started working on, about a decade and a half ago, is what really sparked my interest. I realized how powerful it could be, how I could use it in so many different applications, and how I could harness it in different ways to do impressive things I had thought were impossible with computers.


I wanted to understand how the human brain works, so I got into neuroscience studies a little bit. That got me interested in trying to emulate how real neurons work on computers. It's only been recently that we've had the computational power to really start emulating the complexity of how real neurons work in a meaningful way.


Up until recently, we had to rely on math techniques: use a bunch of training inputs and do a bunch of statistical modeling and regressions, and it was nowhere near real time. We haven't had a good solution for real-time machine learning until recently.


Now, our computers are much, much better and now I'm starting to think that maybe we can model neurons a little more closely. That series of posts was not about how to model it with math and statistics as much as, "How can we simulate the functionality of real neurons and what can that get us?" I don't know the answer to that yet because I don't think it's been done really well. There are a couple of programs out there that do it a little bit, but a lot of them are focused on the deep science of learning how the brain works so it's trying to recreate the real networks of a real brain and map that out visually to help scientists understand how it fits together.


They're not trying to use it for practical machine learning applications that I'm aware of. If you know differently, comment on this podcast; I will look through your answers later because I'm all ears. I want to learn as much about that as possible.


Question 2: Does it make sense to try to emulate what neurons actually do considering they are organic material vs. nonorganic machines? Is it possible that we can ever get close enough that it makes any sense to even try to do it?


24:18


There's a whole body of scientific research. I kind of have to put "scientific" in bunny ears because it sounds so outrageous, so outlandish, so sci-fi, that if you call it science, it sounds like you're a pseudo-science quack. But there's a whole body of research that's simulating the human brain's functionality down to the molecular level.


They're taking really tiny microscopic scans of little slivers of cross-sections of mice brains and things like that. They're also doing this with human brains, but not using live specimens. Obviously, they're not killing people and slicing up their brains to scan them.


They're not doing that yet. Watch out....


This is a whole body of research where they're trying to emulate real brains. There's an interesting project that's emulating the entire physiology of a real worm including all its neurons. This worm has neurons all over the place; not like a centralized brain, but it has 302 neurons total. Super simple to emulate, you would think. Right?


But because it's emulating every molecule of that organism, it takes a lot of computation to do that emulation. I can't remember the name of that project, but it's the C. elegans worm.


Related reading: the OpenWorm project (via Artificial Brains) and the article "We've put a worm's brain in a Lego robot's body," which includes a video of the robot.


26:29


In Europe, there's a project trying to emulate the human brain, and there's another project in the US trying to do the same. So, there are three major projects trying to emulate the brain at the molecular level. Some people see this path of research as a way to achieve immortality: if you can advance the scanning technology enough to do a molecular-level scan of the entire human brain without killing somebody, you can record all their memories and all their thought patterns, which are hardwired into the brain. That's how the brain works; it adapts its physical network over time.


So if you can encode all the physical characteristics of the brain, and you can emulate a human brain on a computer, then that computer is thinking with every bit as much realism as you are.


So, it can think just like you can think. A lot of people see that as a potential for achieving immortality.


It sounds all crazy and quacky, pseudo-science and science fiction, right? This technology is being developed, but we're nowhere near that. We're taking the first baby steps, and currently it is still science fiction. What is science fact is that we can emulate small networks with hundreds of neurons. The goal of emulating the entire human brain, with all its hundred trillion connections, is nowhere near being realized.


I must say this ... seven years into the Human Genome Project (HGP), the effort to map the human genome, we were something like 2 percent of the way through it. Then, seven years later, it was finished.


So, you have to take the exponential increase of our technological capabilities into account. Maybe we are 0.2 percent of the way into emulating the human brain today, but maybe in 20 years it'll be done. Just because it has taken all this time to get where we are doesn't mean it's going to take us that much longer.


It's not a linear process; it's an exponential process. It seems like a faraway vision, like "maybe in a hundred years we'll be there." If you're modeling linearly, sure, it'll be a hundred years, but that's not how technology works.



Honestly, when it comes to legislation, any time legislation has attempted to block the march of technology, it's the legislation that got killed, not the technology. That's true universally. They tried to block the automobile, and look how that worked out. They tried to block the internet. They tried to block music sharing, and the music industry almost died. Trying to legislate the existence of a technology away doesn't work.


Chris: They'll probably slow it down a little, but not completely stop it.


If you look at the timeline of history, they can slow it down a tiny bit, but it's a tiny blip, and it'll get back on track.


Chris: It's super interesting, and I know they're looking at creating these meshes that can go inside the brain and fit with the organic material. I don't know how far along they are with that.


That's true. They're getting into that. One of the really interesting things about the human brain and how neurons work is that neurons don't care where their inputs come from. For instance, if you're blind, they can put a patch on your back that takes input from a camera and maps it to electrical or physical stimulation on your back, and your brain will start to detect the patterns, and you will see through your back.


The neuroplasticity of the brain is truly remarkable. When they talk about putting a patch in your brain, they mean applying little electrical stimulations to the neurons. It doesn't matter where they put it; the brain will start to sense patterns from it, become conscious and aware of what those patterns are, and it will become like another sense for you.


There are people who are blind or color-blind. One guy who couldn't see colors mounted a camera to his head and patched it into the part of the brain that handles audio input from the ears, so he hears colors. It's pretty crazy stuff.


How do you hear colors?


32:50


That's a good question, and if anyone fully understood how it works, it would be a major advancement in our understanding of the brain. (The guy with the camera is Neil Harbisson.)


What role does JavaScript play in all of this?


33:12


The interesting thing about JavaScript is that it's ubiquitous. It exists on just about every device. If you want to enhance the capabilities of almost any given device, the most obvious way to do it is to write some code in JavaScript.


That said, it's really hard to emulate real neurons in JavaScript, because JavaScript is not very good at timing. Real neural plasticity is timing-dependent: there's a 20-millisecond window between the firing of two neurons. If they fire within 20 milliseconds of each other, there will be a plasticity response. If the input neuron fires within the 20 milliseconds before the other neuron, the connection is strengthened; if it fires after the other neuron, the connection is weakened.


Cells that fire together wire together, and cells that fire apart wire apart. In animal brains, this is timing-dependent: you have to know whether it happened within 20 milliseconds, and what was happening with the signals inside that neuron.


In order to simulate that in JavaScript, you either have to build a fake clock that gives you that 20 milliseconds of precision, because JavaScript's timing is so choppy and unreliable, or you have to break out of JavaScript and use other APIs that are built in C++ and the like.


We just happen to have one of those built into the web platform: the Web Audio API, which processes audio around 44,000 times per second. It's much more accurate than setTimeout and things like that. So there actually are ways to do it in JavaScript, but I wouldn't say it's the ideal language for it.
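For example, here's a minimal sketch (my assumption of one approach, not code from the interview) that timestamps spikes with the Web Audio clock instead of setTimeout:

```typescript
// AudioContext.currentTime advances with the audio hardware clock,
// so it's far steadier than setTimeout or Date.now for 20 ms windows.
const audioCtx = new AudioContext();

const STDP_WINDOW = 0.02; // the 20-millisecond window, in seconds
let lastInputSpike: number | null = null;

function onInputSpike(): void {
  lastInputSpike = audioCtx.currentTime; // high-resolution time in seconds
}

// Call this when the downstream neuron fires; strengthen the
// connection only if the input neuron fired within the window.
function onOutputSpike(strengthen: () => void): void {
  const now = audioCtx.currentTime;
  if (lastInputSpike !== null && now - lastInputSpike <= STDP_WINDOW) {
    strengthen(); // hypothetical weight update
  }
}
```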


I chose JavaScript because it's the language I use the most and am most familiar with, and if I'm going to learn something, I'd rather use JavaScript so I'm not learning two different things at once.


We just took a little trip into the future. Let's come back to today and talk about what's actually available to us with augmented reality and what we can actually do there. A perfect example of this is Pokémon GO. Everyone's going crazy over it, and it's opening the floodgates to other possibilities. I'm sure there are a lot of people trying to find ways to monetize it right now. What role does JavaScript play in that?


36:40


Building an app like Pokémon GO is much, much easier than it sounds. The basic functionality is really simple. They don't actually do a lot of environment detection for the placement of the Pokémon; sometimes a Pokémon seems to hover in mid-air, not sitting on any particular surface. So, the current state of Pokémon GO would be super easy to emulate in JavaScript.


The most complicated parts are the location APIs and figuring out how to distribute things like the PokéStops and the gyms; those are much harder problems than displaying the mixed reality that Pokémon GO uses.


Their augmented reality is really just overlaying a simple image on top of a background, a live video background, and that's really easy to do in JavaScript.
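As a rough illustration (mine, not Eric's; the sprite asset and coordinates are hypothetical), the whole effect boils down to drawing the camera feed to a canvas and painting an image over it:

```typescript
// Draw the live camera feed, then draw a sprite on top of it.
const video = document.createElement('video');
const canvas = document.querySelector('canvas') as HTMLCanvasElement;
const ctx = canvas.getContext('2d')!;
const sprite = new Image();
sprite.src = 'sprite.png'; // hypothetical creature image

navigator.mediaDevices
  .getUserMedia({ video: { facingMode: 'environment' } })
  .then((stream) => {
    video.srcObject = stream;
    return video.play();
  })
  .then(() => requestAnimationFrame(draw));

function draw(): void {
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height); // camera background
  ctx.drawImage(sprite, 180, 120, 64, 64); // "creature" overlaid on the feed
  requestAnimationFrame(draw);
}
```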


I think those kinds of apps are going to be really popular over the next few years, especially after the explosion of Pokémon GO. When some new technology seems to come along suddenly, usually the technology has actually been there, ready, for a long time.


Ingress is basically Pokémon GO. It's essentially the same game, but using different intellectual property. You can actually use Ingress to help play Pokémon GO; you can run them side by side. All the PokéStops are the same as the portals in Ingress. They're pretty much literally the same game.


It takes a killer app, one that's wildly popular to break people out of their traditional mode of thinking and make them realize that there's something to this. Now that Pokémon GO is making 10 million dollars a day, everybody is going to be making augmented reality apps for the cell phone.


I was starting to worry because the augmented reality technology has been around for so long that it has literally been possible to make a game like Pokémon GO for about 10 years and it took that long for somebody to do it.


Now that people see the potential, I think you're going to see a ton of games trying to be the next Pokémon GO. I think what a lot of people don't realize is what made Pokémon GO successful wasn't the technology, it was the IP (intellectual property); the immense popularity of the Pokémon brand. Without the Pokémon name, Pokémon GO is a blip on the radar, another Ingress (not to discount Ingress). It was a really cool game and JavaScript is perfectly fine for doing something like that.


For people who are listening, how can you get started if you want to create something like that? Do you know of tools or frameworks that are geared toward creating these types of AR (augmented reality) games?


40:10


If you want to play with AR, don't play with the fake Pokémon GO AR. That's not the real AR. Pokémon GO is cool, but if you think Pokémon GO is AR, you don't know augmented reality. The real AR is really exciting. It's going to transform our lives. In a decade or so, we're going to wonder how we ever got along without it, the way we wonder how we got along without our cell phones. I remember a time before cell phones, and I didn't realize the difference one could make until I actually had one. It's a part of me now; I don't want to go anywhere without my cell phone. When cell phones were new, people made fun of them. Now, something like 2 billion people have them.


Twenty years from now, AR will be as ubiquitous as cell phones, and potentially a lot more transformational. Basically, you will be wearing the internet on your face.


Chris: There are so many things about to change. All these changes are very exciting. We could talk about it all day. I do want to ask you a few questions about hiring. You have a lot of opinions about it, particularly for JavaScript candidates.


I'm finding out that hiring is not only hard for the candidates themselves, but especially difficult for companies trying to find the talent. I have questions for both sides of the coin. Let's start with candidates, maybe they just graduated from high school or college and they don't have the years of experience companies are asking for. How do they find jobs?


43:10


Build stuff! Build stuff! Actually, I have courses online with some brand-new student projects. They're basically app ideas for students looking for stuff to build; they are assignments. A student can clone a project on GitHub and build an implementation of that app. Then they have some code to share with potential employers.


They can show that they understand:



  • closures

  • objects

  • how to structure an application


That can help them get their foot in the door.


The thing that got me my first really good programming job was an open source project I built: a form validator library. What made the difference was having a good example of good code online that was useful for real people.


To the people complaining, "How do you get experience when nobody will hire you without experience?": build something yourself and turn it into something good. Write good code that's well structured. Make it open source. Share it with potential employers to say, "See, I have proof that I can code." You're not just a guy with some keywords on a resume anymore; you've got real projects they can look at. That's how you get your foot in the door.


Chris: I would even say it doesn't have to be a big project or something that doesn't exist already. You don't have to reinvent things. You can just build something that you find fun and interesting. Even if there's another version of it out there, it doesn't matter. You're still getting that experience. You're building it differently ... as long as you're not copying and pasting. If you're actually building it from the ground up, it's a tremendous learning experience and great for your resume.


Sure, if you can avoid copying and pasting from other examples, you could even put your own little spin on a "To-Do" app, as long as you do something that shows off your creativity and your abilities in a way that isn't totally cut and paste from somebody else's project.


What if someone's switching from a different industry? They've been in the workforce for a while and they realize they don't like that industry or they're trying something different? Is it the same thing? Work on different projects even if they don't have time and they've got bills to pay?


45:45


Do it in your off hours, at night, weekends. Just do it! You don't really have any other alternative. Either you have the experience to get hired on a development team or you don't, and you're not qualified.


Development teams aren't hiring people even for junior roles who have never touched a line of code before. They're hiring people who have coded, who have built something that they can look at. You can't just say, "I've read a JavaScript book," and expect to go out and get a job. You have to get some practice under your belt and jump online on things like Free Code Camp. Come to ericelliottjs.com and do some practice projects. Get those under your belt and then start looking for jobs.


Initially, your first programming role is going to be a junior role, even if you write something on your own, unless you turn that something into a successful business and you can say, "During this 2 to 3 year period, I made a successful business off the software that I built." That counts as experience.


If you have a couple of projects on GitHub, that's not experience and you're going to go into a junior role.


What about using recruiters?


47:35


Don't! Honestly, I think that recruiters are a plague on the industry, and I'm not trying to hurt anyone's feelings. I'm sure they are wonderful people, and I know they're trying to make a living, but recruiters do more harm than good in my opinion. For senior developers like me, it's basically a form of legal harassment, legal stalking. Only instead of one stalker, I have a thousand.


It's not cool. You're creating problems for people instead of solutions. If you really want to create solutions, transform your role. Instead of being a recruiter, learn about the technology that the company is hiring for, and turn yourself into a technology evangelist, and let the candidates come to you instead of the other way around.


There are plenty of candidates looking for jobs. There are hundreds of thousands of developers out there who potentially might want to make a career change. Let them come to you. Do something really, really cool, put some content out there, and let the candidates find you instead of hounding them.


It really creates a big problem for the industry, and most recruiters don't bother learning about the technology. They get some keywords, do some searches, and then throw stuff at the wall until something sticks.


They don't understand how to screen candidates, so they send candidates who are wrong for the roles to the companies looking to hire, which wastes a tremendous amount of those companies' time. Whereas an agent who has partnered with real techies, prescreened their candidates, and been very selective about who they take on can send some high-quality people. I happen to know a really good agent (@JS_Cheerleader on Twitter).


Instead of using a recruiter, you can shop around. There are pipelines producing new talent, so ... OK, I have to backtrack.


Right now, the only way for junior developers to get a foot in the door is for them to go to general purpose coding agencies and get hired to build cheap apps ... cheap first versions of apps or cheap marketing landing pages for companies that just want to do some marketing for their products.


A lot of those teams are lousy: all newbies, all pretty green, all building substandard stuff, with a few exceptions. There are a few good agencies out there. In general, the way a lot of developers get their foot in the door is by going to agencies, which tend to hire a lot more junior developers than big tech companies do.


That is really problematic because the companies that are building the real production apps, not just the version one prototype apps, tend to exclusively concentrate on hiring senior level developers. The problem with that is that there are not enough senior level developers to fill all the roles that are opening up for them.


That shortfall is widening, and by 2020 we may have a million unfilled developer roles out there, because companies are looking for people with much more experience than the current job pool has. It's really hard to find really good senior-level developers on the market. One thing a company can do to change that situation: for every two or three senior developers you hire, hire a junior developer, and have that junior developer apprentice with the senior ones. Pair them up and have the senior developers mentor the junior developer.


If you do TDD (Test Driven Development) and Code Review and a lot of pair programming for your junior developers, what you will see is that that junior developer will come up very quickly to the level of the senior developers (within a year or two).


While they are learning, they are also providing fresh insights. The great thing about junior developers is that no one has ever told them that something is impossible, so they're willing to try anything and they bring fresh approaches and fresh insights that the senior developers are not going to think of.


The senior developers have built this track in their mind of how things are done and you just do it that way. That's how we ended up with MVC for 3 decades and then React.js came along and blew the doors open as a fresh approach. That's what junior developers can do for your team. They can blow the doors off your traditional way of doing things and help you see fresh approaches that senior developers aren't going to see.


So it's really important to have junior developers on your team, not just because they're easier to find and hire, but because they bring fresh insights. More diversity in your team is a really good thing.


As the company trying to find that talent, whether it's senior level or junior, how do you screen candidates?


53:35


It's very different depending on whether they are senior or junior developers. With a junior developer, you are not necessarily looking for what they know. You're looking for an eagerness to learn and evidence that they have learned a lot in a very short time.


**Ask them:**



  • What have you been doing to learn?

  • Do you have some projects on GitHub that I can look at?

  • Have you been on FreeCodeCamp?

  • Have you been on ericelliottjs.com, and have you been looking at the Egghead tutorials?

  • How are you getting all your information?

  • What is it that makes you excited about learning?

  • Why do you want to learn this stuff?

  • Why do you want to get into the industry?


As you ask these questions, gauge their excitement level. Look at how much they've learned and in what time frame they learned it. If they have been developing for 7 years and they're still at the junior level, maybe that's not the one to hire. But if they've been developing for 6 months and they're already building an impressive little app in React and Redux, already learning about functional programming, and already picking up object composition and modularity, if they have some sense of all these concepts within six months, hire that person right now. Don't let them walk out the door.


That's the person you want to hire; someone who is eager to learn and who is in constant learning mode. That's what development is today. It's not just applying what you knew yesterday to every new problem. It's learning what you don't know yet that can make the app even better tomorrow.


Chris: This is a really good note to close on. Thank you, Eric for coming back on the show and talking about some really interesting things. I'm going to have to go back to this episode and listen to everything we talked about. It was definitely very interesting and I learned a lot.


If people want to follow up with questions or check out your work, how can they do that?


They can go to ericelliottjs.com and check out my courses. I'll teach you a lot of stuff you don't know yet about JavaScript. I guarantee it. Regardless of how many years you have been programming, I can teach you something new about programming that you didn't know yesterday.


If you want to contact me, the best way to get my attention is through Twitter, @JS_Cheerleader. She can help me find the people who need my attention the most. She's always very helpful ... definitely a great resource.
And check out my book, Programming JavaScript Applications. I have a new one coming soon.


Is it still a work in progress? Can you tell us anything about it?


57:00


In progress, super top secret right now, but it's coming.


Chris: If you want to contact me, you can follow me @ScaleYourCode or contact me chris@scaleyourcode.com.


Don't forget to leave a comment to thank Eric for his time. Thank you for tuning in.



How did this interview help you?

If you learned anything from this interview, please thank our guest for their time.
