search for: Mexico vs France Olympics
10 results (0.797 seconds)
group by podcast
How often was "Mexico vs France Olympics" found in episode texts?

podcasts


search results

     
     
  •  
    Futbol Americas: Was the USWNT overconfident vs. Sweden?
    2021-07-23 (duration 1h2m)
    [from description] ...yo Olympics. Is it time to panic for the USWNT? Then, the guys discuss Mexico’s dominant victory over France in their Olympic opener and what it means for their medal chances ...
    [from itunes:subtitle] Sebi and Herc recap the USWNT’s stunning loss to Sweden at the Tokyo Olympics. Is it time to panic for the USWNT?
    [from itunes:summary] ...yo Olympics. Is it time to panic for the USWNT? Then, the guys discuss Mexico’s dominant victory over France in their Olympic opener and what it means for their medal chances ...
  •  
    New podcast:location tag, interview with Sam Liang, CEO of Otter.AI, about live notes.
    2020-12-11 (duration 40m)
    [transcript]
    13:11 one podcasts are in France in Germany
  •  
    Lessons From 17 Years of Podcasting with Evo Terra
    2021-07-09 (duration 1h6m)
    [transcript]
    51:41 the Tour de France, it's never
     
    It's a Buzzcast Takeover!
    2021-06-04 (duration 43m)
    [transcript]
    32:21 we're gonna talk about Kong Vs.
     
    The Average Podcast Episode Gets This Many Downloads in the First 7 Days (feat. Tom Buck)
    2020-11-20 (duration 1h3m)
    [transcript]
    26:02 Mexico, and 2.6. In the
  •  
    Trusting yourself. Tim's story on walking out of hell from Long Haulers.
    2021-06-15 (duration 1h26m)
    [transcript]
    19:51 backyard Olympics that I
     
    Purpose driven. Joe Delagrave's story on turning obstacles into miracles.
    2021-05-18 (duration 1h13m)
    [transcript]
    41:58 During the Olympics and
    42:37 are parallel to the Olympics.
    42:27 you've been to the Olympics. Is
  •  
    All About Cycling - Blazing Saddle Sore
    2021-06-13 (duration 45m)
    [transcript]
    06:22 Olympics mountain bikes were
    10:30 Olympics to actually get more
    10:54 you watch the Olympics, your
     
    An Artists Life
    2021-05-16 (duration 42m)
    [transcript]
    28:35 Mexico City and then I think
    28:38 any follows Mexico How do I get
     
    Riding Motorbikes: Show Us Your Helmet
    2021-04-11 (duration 45m)
    [transcript]
    36:16 New Mexico or Wisconsin? A
    36:24 is a B's New Mexico C is
     
    Making The Most Of Your Life: Who Wants To Live Forever
    2021-03-28 (duration 48m)
    [transcript]
    32:19 leave Mexico and sit down to
     
    UK Music Festivals - Ramblin Middle Aged Man
    2021-02-21 (duration 38m)
    [transcript]
    11:19 France called hell fest. And I'd
     
     
    Bearded Villains - Interview With A Villain
    2021-02-07 (duration 34m)
    [transcript]
    26:19 Yoda vs. villains.
  •  
    59: Art Bell, Comedy Central Founder on The Belief That Changed Comedy
    2021-06-01 (duration 51m)
    [transcript]
    15:20 take part in the Olympics and
     
    55: Unmasking Leadership with Eddie Campa
    2021-05-04 (duration 1h0m)
    [transcript]
    03:46 United States and Mexico border.
    25:09 Albuquerque New Mexico has a
  •  
    Living Fast, Dying Young or The 27 Club
    2021-01-13 (duration 45m)
    [transcript]
    40:28 France
    40:52 France,
    40:58 France,
     
    Rendlesham Forest UFO/UAP Incident
    2021-01-06 (duration 42m)
    [transcript]
    19:19 Mexico
    02:08 France
    02:30 France.
     
    Yule Monsters and Good Saint Nick
    2020-12-16 (duration 55m)
    [transcript]
    44:42 France
    46:09 France
     
    Mercy Brown and Black Aggie
    2020-10-27 (duration 55m)
    [transcript]
    44:52 France
  •  
    Episode 4
    2020-09-20 (duration 26m)
    [transcript]
    24:47 Mexico.
  •  
    Open Source Production Grade Data Integration With Meltano
    2020-07-13
    [transcript]
    41:19 Yeah. So since the beginning, we knew that while we wanted Meltano to be a convention-over-configuration tool, where most people would just be able to get started without having to tweak too much, we did recognize that not everyone would want to use every part of Meltano. Each of the seven letters in the word Meltano actually stands for model, extract, load, transform, analyze, notebook and orchestrate, because that was the wider end-to-end vision we had in mind at the time, but we knew that we were not going to be able to convince everyone to go and use all of Meltano at once. So architecturally, Meltano starts with the concept of plugins: extractors and loaders and transformations, but also transformers like dbt and orchestrators like Airflow. Your Meltano project, which is a single source of truth for your data pipelines, has a meltano.yml file which connects the various plugins that you've plugged in, which are kind of just dependencies that point at either a specific PyPI package or a Git repo URL that contains a Python package. So because of this plugin-based approach, the decision to focus only on ELT for the time being, and set aside these other steps that have to do with analysis and notebooking etc., only really meant that we wouldn't stress those other plugins anymore, because if you were using Meltano with only extract, load and transform plugins, even in the previous iteration, it would basically already be the exact ELT tool that we have today. So that plugin-based approach means that it was very much pick and choose, and you don't need to use all of it. You can use it as a simple Singer runner, you can use it as a pipeline runner if you also want to bring dbt transformations into that. 
And you can use it as a system to kind of abstract away the orchestration layer, if you're comfortable only using pipelines consisting of E, L and T steps that just need to be run on a schedule. So the original architecture being very much pick-and-choose and plugin-based allowed us to pivot relatively easily to focus on only a specific part of that whole story. And someone using Meltano today, if they don't dig deep into the documentation, will never know that it can actually do a couple of other things that we are for the moment explicitly not stressing. But we have also not removed these things from Meltano either, because if we do find a user who is motivated and inspired enough, like hey, it would be cool if I also did basic pointing to the Olympics, we want this developer, this contributor, to start contributing in that direction and making that part of it more powerful, because we do see Meltano very much evolving in whatever direction the community takes us. It doesn't necessarily need to be exactly what I've had in mind from the beginning, and it's very likely that we will be spending months and months or years and years just focusing on ELT. But just like we saw with GitLab, there is power in allowing people to go beyond the standard functionality that exists today and add some extra features that they want. It's very much up to the community to see where it goes, and fortunately the plugin-based architecture allows for that really easily. And just as an example of the power of that: right now, Singer taps and targets to Meltano are just extractor and loader plugins that happen to use the Singer runner. 
So hypothetically, theoretically, if another extraction framework comes up that people start asking us to support, or if an alternative to dbt becomes popular, it is doable to add a new transformer plugin type or a new extractor plugin type to Meltano, which will allow us to move in that direction. Because again, we want to be the glue between these different tools, more so than lock people into a specific set of tools. And the idea is very much that the Meltano project is your data project, where your data engineers, analytics engineers, analysts, etc. work from. And we want to be able to evolve with data teams as they decide to move to different tools over time. What we've seen recently is that we started out with supporting specifically the Airflow orchestrator, which means that if you are using Meltano and you want to start orchestrating — or in this case, running on a schedule — your pipelines, it's really easy to add Airflow as the backend orchestrator implementation. But because this is also plugin-based, it's relatively straightforward to add support for another orchestrator like Prefect or Luigi, so that again it's up to individual data teams what they prefer, what they already have experience with, or what they want to plug into that they already have deployed. And Meltano makes it really easy to specify the different tools your setup consists of and how those are tied together, more so than locking you into any specific combination of tools. That architectural pattern is very much what has allowed us to pivot as easily as we do, and it's pretty crucial to the future that we see, with Meltano basically outliving the specific open source tools that are in vogue today that people might move towards. 
So I think it's less likely that we'll ever move away from Singer taps and targets, because obviously we are also investing in having that ecosystem grow and empowering the community. But on the front of orchestration, you're already seeing that Airflow is not necessarily losing popularity, but projects like Prefect are being considered by new teams over Airflow, because of course these tools also evolve with the data space, and hopefully Meltano will be able to evolve with the data space as well.
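The plugin declarations described above can be sketched as a minimal meltano.yml. This is an illustrative assumption, not taken from the episode: the specific plugin names (tap-gitlab, target-postgres) and pip URLs are hypothetical examples of the "dependency pointing at a PyPI package or Git repo" idea.

```yaml
# Hypothetical meltano.yml sketch: each plugin is a dependency that
# points at a PyPI package (or a Git repo containing a Python package).
plugins:
  extractors:
    - name: tap-gitlab          # a Singer tap used as an extractor plugin
      pip_url: tap-gitlab
  loaders:
    - name: target-postgres     # a Singer target used as a loader plugin
      pip_url: target-postgres
  transformers:
    - name: dbt
      pip_url: dbt-core
  orchestrators:
    - name: airflow             # swappable backend (e.g. Prefect, Luigi)
      pip_url: apache-airflow
```

Because each entry is just a named dependency, swapping one orchestrator or loader for another is a config change rather than a rewrite, which is the pick-and-choose property described in the interview.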
     
    Data Collection And Management For Teaching Machines To Hear At Audio Analytic
    2020-06-30 (duration 57m)
    [transcript]
    37:08 Well, if I take the top piece, then Tom, if you relate it back to the source data piece — so in terms of that feedback: generally, because we've got the world's largest collection of data for this area, we have a high degree of certainty about the models we're providing to the marketplace already. You know, we have large amounts of 24/7 recordings, large amounts of environment recordings, and obviously large amounts of targets in environments. So we typically find that we're pretty good in our guesses of what the performance will be for a new sound profile that we're producing. In terms of the sort of things we learn — going back to that example of things that you just can't predict — I was using the example of the bird in the south of France and the North American smoke alarm. That is something beyond the wit of man to figure out sitting in a room; you're only going to get that sort of insight from actual field deployments. Our technology is deployed in something like 160 countries worldwide, so we've got a very good sense of the sort of problems that are faced on a worldwide scale. In terms of how that feeds back: obviously it feeds back into, do we need more data in a certain area? But Tom, you're probably best placed to pick back up that full loop back to the beginning of the pipeline, the data collection piece.
    25:33 So the taxonomy is structured on what's called an actor principle: at the top level, things are split by what causes them — caused by humans, caused by geography, if that makes sense, and caused by biology — and then it cascades down from there. The actor principle is a fundamental one; it's a specific taxonomy principle we came up with, because obviously something needs to cause those sounds in the environment. Using that as a fundamental building block means that you're not going to go far wrong. In terms of your last question, there are tons of things that we've effectively learnt that we didn't know we were going to have to learn. One of my favorite examples is not realizing that sometimes the sound world conspires for you, and sometimes it conspires somewhat against you. So there is a smoke alarm — I think it's the third or fourth most popular selling smoke alarm in North America — and it sounds identical to a bird species in the south of France. Now, I'm pretty sure that that bird species hasn't evolved to mimic the smoke alarm. But that is then presented to the machine learning engineers: well, these things sound pretty much identical to humans, but you need to separate them out, otherwise people are being told that their smoke alarms are going off when in fact it's just the bird that they keep in their living room, which happens to sound identical to this North American smoke alarm — which the engineers solved. But those sorts of interesting quirks of, I suppose, fate are fascinating to experience, although they do give Tom and the rest of the team, I'm sure, sleepless nights of worry as they try to figure out how best to collect the data and best separate it out.
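The cause-based ("actor") taxonomy described above — top-level split by what produces a sound, cascading down from there — can be sketched as a nested structure. The category names below are purely illustrative assumptions, not Audio Analytic's actual taxonomy:

```python
# Hypothetical sketch of a cause-based sound taxonomy: the top level
# splits by what produces the sound, then cascades into finer classes.
taxonomy = {
    "caused_by_humans": {"speech": {}, "alarms": {"smoke_alarm": {}}},
    "caused_by_geography": {"wind": {}, "rain": {}},
    "caused_by_biology": {"birdsong": {}, "dog_bark": {}},
}

def paths(tree, prefix=()):
    """Yield every node in the taxonomy as a path from the root."""
    for name, sub in tree.items():
        node = prefix + (name,)
        yield node
        yield from paths(sub, node)

print(len(list(paths(taxonomy))))  # prints 10
```

Anchoring every class to a cause is what keeps the hierarchy unambiguous: the bird call and the smoke alarm in the anecdote sound identical but live under different top-level causes.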
     
    The Benefits And Challenges Of Building A Data Trust
    2020-02-03 (duration 56m)
    [transcript]
    30:49 Yeah, this is so important. And again, it's in our DNA to care a lot about this, and it has actually driven our decisions largely around how we grow and who we work with. The idea of the data trust is broadly applicable — as I mentioned earlier, it grew out of the intelligence community, which is an area that we're not working in. But even though the idea is broadly applicable, the software we've written is, I think, appropriate in a lot of different contexts. We've limited our work so far, and the clients that we've taken on, largely to the education-to-work domain, which is one that we understand very well. We have a lot of folks on staff with deep subject matter expertise, who have the ability to look at the particular problems the data trusts are trying to solve, identify potential issues of bias or some other ethical issue that might arise, and actually bring their own expertise to bear on it — or at least be able to issue-spot and kick questions that arise up to the governance committee to deal with. Because we've circumscribed the realm in which we're working with early clients, I think that has given us a lot of comfort: we're working with these early clients very closely, we know what they're doing, and we have the in-house expertise to spot potential issues. As we grow as a company and move beyond the education-to-work domain, I think this becomes a harder problem to solve, unless you want to basically staff up in every single domain that may exist. If we were to move into healthcare, for example, we'd have to go out and hire folks who have been working in healthcare IT, or in actual healthcare practice, long enough to be able to issue-spot with the same sort of rigor that we're able to in the domain we currently work in. 
And I think the goal is that as we move on as a company to that place where we're working across a bunch of different domains, we wouldn't necessarily have to, but we probably would continue to offer that as part of our services offering in a lot of different scenarios. So it's a combination of making sure that we're staffed to handle the trickiest bits — like if we start working with health data, having at least one person in house who is able to weigh in on those issues and spot them — but then also, as we're building out the site, being able to use tools like Amazon Macie and others to bring some machine learning to bear on this as well, to at least flag potential issues like: oh, it looks like this is a gender field — are you accounting for the fact that gender can change for some individuals, or that this shouldn't be stored as a binary? Or: oh hey, it looks like this is a social security number — are you sure you want to publish this? Things like that can be flagged in an automated way. I don't think it's sufficient to just rely on automated flagging, which is part of why a governance structure exists in a data trust. You would hope that if the decision to publish a particular data resource is reviewed by the multiple parties who are contributing data to it, that review process would highlight a lot of these issues. But given that the data trust idea is new to a lot of folks, and the governance structures that we're setting up are still new, we do feel it's incumbent upon us as a vendor to keep our own human eyes on a lot of what's happening, so that while we're in the process of automating some of these ethical controls, we have highly trained individuals who are helping guide us along
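The automated-flagging idea mentioned above — machine checks that surface a possible SSN or a binary-encoded gender field for human review — can be sketched in a few lines. Everything here is an illustrative assumption (the field names, patterns, and rules), not any vendor's actual tooling:

```python
import re

# Hypothetical rule-based sensitive-field flagging, meant as a
# complement to human/governance review, never a replacement for it.
SSN_PATTERN = re.compile(r"^\d{3}-\d{2}-\d{4}$")

def flag_fields(record):
    """Return warnings for fields that may need ethical or privacy review."""
    warnings = []
    for name, value in record.items():
        if isinstance(value, str) and SSN_PATTERN.match(value):
            warnings.append(f"{name}: looks like a social security number")
        if name.lower() == "gender" and value in {"M", "F"}:
            warnings.append(f"{name}: stored as a binary; consider a broader encoding")
    return warnings

print(flag_fields({"gender": "M", "taxpayer_id": "123-45-6789"}))
# -> two warnings: one for the binary gender field, one for the SSN-like value
```

As the speaker notes, flags like these only open a conversation; the decision to publish still goes through the data trust's governance process.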
     
    Building The DataDog Platform For Processing Timeseries Data At Massive Scale
    2019-12-30 (duration 45m)
    [transcript]
    03:36 Yeah, it was a pretty interesting time. There were not a lot of resources about Hadoop, and Hive was kind of the first tool that was easy to manage for lots of engineers, because you basically write SQL. But it only had the one MapReduce model, which wasn't super scalable compared to YARN, and there were lots of tricks around that.
     
    Automating Your Production Dataflows On Spark
    2019-11-04 (duration 48m)
    [transcript]
    14:10 Yeah, the technology itself really works at a couple of different layers. The infrastructure we've designed to run on all three clouds, that being Amazon, Azure and Google. As a unified infrastructure layer, we run two Kubernetes clusters. One is for what we call our control plane: that is all of our microservices that operate at the metadata layer. It's about 15, maybe 20 microservices now, a combination of Node, Golang and Scala services that mostly talk gRPC to each other and build a pretty cohesive model of what's going on in the system — and I can dive more into that. And then the data plane is the other Kubernetes infrastructure. That's elastically scaled on spot and preemptible instances, and it runs both Spark on Kubernetes for a lot of our Spark infrastructure and also runs workers — essentially autoscaled Go-based workers that we use for a lot of processing that sits outside of Spark, where the shape and model of the work required fits better into a custom set of work run directly on Kubernetes, as opposed to in Spark. But both of those run inside of this elastic compute infrastructure,
     
     
    A High Performance Platform For The Full Big Data Lifecycle
    2019-08-19 (duration 1h13m)
    [transcript]
    51:08 So you're absolutely right, the community has apparently at least reached a plateau, size-wise, in the HPCC Systems community, in number of people. Of course, it wasn't the first in the open. We had HPCC for a very long time as closed source; it was proprietary, and at the time we believed it was so core to our competitive advantage that we couldn't afford to release it any other way. When we realized that in reality our core advantage is, on one side, data assets and, on the other side, the high-level algorithms, we knew the platform would be better sustained in the long run — and sustainability is an important factor for us, because the platform is so core to everything we do — by making it open source and free, completely free, both free as in speech and free as in beer. We thought that would be the way to ensure long-term sustainability, development, expansion and innovation in the platform itself. But when we did that, it was 2011, so a few years after Hadoop. Hadoop, if you remember, started as part of another project around web crawling and content management, which eventually ended up as its own top-level Apache project in 2008, I believe. So it was already three to four and a half years after Hadoop was out there, and its community was already large. Over time, we did gather a fairly active community, and today we have an active, deeply technical community that not only helps with extending and expanding HPCC, but also provides use cases — sometimes interesting use cases of HPCC — and uses HPCC regularly. So while the HPCC Systems community continues to grow, the community seems to have reached a plateau. 
Now there are other communities out there which also handle some of the data management aspects with their own platforms, like Spark, which I mentioned before, which seems to have a better performance profile than what Hadoop has, so it has also gathered active people in those communities. Well, I think open source is not a zero-sum game where if one community grows, the other one will decrease and eventually the total number of people will be the same across all of them. I think every new platform that introduces capabilities to open source communities, uses new ideas, and helps apply innovation to those ideas is helping the overall community in general. So it's great to see communities like the Spark community growing, and I think there's an opportunity — and many users in both communities are using both at some point — for all of them to leverage what is done in the others. Surely, sometimes the specific language used in the platforms creates a little bit of a barrier. Some of these communities, just because Java is potentially more common, use Java instead of C++ and C. So you see that sometimes the people in one community who may be more versed in Java feel uncomfortable going and trying to understand the code in the other platform that is coded in a different language.
    03:10 Oh, absolutely. So at LexisNexis Risk Solutions, we started with risk management as, I'd say, our core competency back in the mid 90s. And as we got into risk management, one of the core assets when you are trying to assess risk and predict outcomes is data. Even before people spoke about big data, we had a significant amount of data, mostly structured — semi-structured data too, but the vast majority structured. And we used to use the traditional platforms out there, whatever we could get our hands on. Again, this is old, back in the day before Hadoop and before MapReduce was applied as a distributed paradigm for data management or anything like that. So databases — Sybase, Oracle, Microsoft SQL, whatever it was — and data management platforms — Ab Initio, Informatica, whatever was available at the time. And certainly the biggest problem we had was twofold. One was scalability: all of those solutions typically run on a single system, so there is a limit to how much bigger you can go vertically. And if you're also trying to consider cost affordability, that limit is much lower still, right — there is a point where you go beyond what a commodity system is and you start paying a premium price for whatever it is. So that was the first piece. One of the attempts at solving this problem was to split the data and use different systems, but splitting the data also creates challenges around data integration. If you're trying to link data, surely you can take the traditional approach, which is to segment your data into tables, put those tables in different databases, and then use some sort of foreign key to join the data. But that's all good and dandy as long as you have a foreign key that is unique and reliable, and that's not the case with data that you acquire from the outside. 
If you generate the data yourself, you can have that; if you bring the data from the outside, you might have a record that says this record is about John Smith, and another record that says this record is about Mr. John Smith — but do you know for sure that those two records are about the same John Smith? That's a linking problem, and the only way you can do linking effectively is to put all the data together. And now we have this particular issue where, in order to scale, we need to segment the data, but in order to do what we need to do, we need to put the data in the same data lake, as it's known today. Earlier we used to call this a data land; eventually we ditched the term in the late 2000s because "data lake" became more well known. So at that point, the potential paths to overcome the challenge were: well, we either split the data as before and then come up with some sort of meta system that leverages all of these distributed data stores — and potentially, when you're doing probabilistic linkage, you have problems whose computational complexity is n squared or worse, so that means we would pay a significant price in performance, but potentially it can be done if you have enough time, your systems are big enough, and you have enough bandwidth between the systems. But the complexity you're gaining from a programming standpoint is also quite significant. And
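The linking problem described above — no reliable foreign key, so records must be compared pairwise — can be illustrated with a toy sketch. The similarity rule here is a purely hypothetical stand-in for real probabilistic linkage:

```python
from itertools import combinations

# Toy sketch of record linkage without a foreign key: naive pairwise
# comparison over n records is O(n^2), which is the cost mentioned above.
def similar(a, b):
    """Hypothetical similarity rule: at least two shared name tokens, ignoring titles."""
    tokens = lambda name: {t.lower().strip(".") for t in name.split()} - {"mr", "mrs", "ms"}
    return len(tokens(a) & tokens(b)) >= 2

records = ["John Smith", "Mr. John Smith", "Jane Doe"]
links = [(a, b) for a, b in combinations(records, 2) if similar(a, b)]
print(links)  # [('John Smith', 'Mr. John Smith')]
```

With n records, `combinations(records, 2)` generates n(n-1)/2 pairs — fine for three names, prohibitive across split systems, which is why the speaker argues for putting all the data in one place before linking.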
     
    Managing The Machine Learning Lifecycle
    2019-06-10 (duration 1h2m)
    [transcript]
    56:24 just training versus mentioned vs. And all that tech system caffeine stuff that's not relevant at all to Machine Learning Management. But yeah, I would be happy to elaborate on that, as well.
  •  
    A Flexible Open Source ERP Framework To Run Your Business
    2020-03-23 (duration 1h7m)
    [transcript]
    42:12 So for the project itself, the foundation relies on receiving donations from people, and hopefully — we are very lean, so we don't have a lot of costs — with the few donations that we have now, it allows us to pay for servers and the trademarks, and we also have some money to organize, sometimes, trainings for developers, or conferences. And for the project itself, as we are a free software project, we really rely on the fact that people use our project and need it to exist for their business. So at B2CK, for example, we have customers and they pay us to work on Tryton, and there are other companies that do that — in Spain, Argentina, Germany and so on, France of course — and thanks to them we are able to dedicate some time to the free software project. Although we are a small project when you compare us to the Linux kernel or Firefox or, I don't know, Django, we can still live, thanks also to the dedication of some developers who do the work in their free time and so on. Yeah, it's a bit tricky, but it works, and economically it works also. It's not unique to us, but it works.
     
    Getting A Handle On Portable C Extensions With hpy
    2020-03-17 (duration 35m)
    [transcript]
    00:13 Hello, and welcome to Podcast.__init__, the podcast about Python and the people who make it great. When you're ready to launch your next app or want to try a project you hear about on the show, you'll need somewhere to deploy it, so take a look at our friends over at Linode. With 200 gigabit private networking, node balancers, a 40 gigabit public network, fast object storage and a brand new managed Kubernetes platform, all controlled by a convenient API, you've got everything you need to scale up. And for your tasks that need fast computation, such as training machine learning models or running your CI/CD pipelines, they've got dedicated CPU and GPU instances. Go to pythonpodcast.com/linode — that's L-i-n-o-d-e — today to get a $20 credit and launch a new server in under a minute. And don't forget to thank them for their continued support of this show. As a developer, maintaining a stable flow is key to productivity. Don't let something as simple as the wrong function ruin your day. Kite is the smartest completions engine available for Python, featuring a machine learning model trained by the brightest stars of GitHub, with ranked suggestions sorted by relevance, offering up full lines of code, and a Copilot that offers up the documentation you need right when you need it. Get it for free today at kite.com, with integrations for top editors including Atom, VS Code, PyCharm, Spyder, Vim and Sublime. And you listen to this show to learn and stay up to date with the ways that Python is being used, including the latest in machine learning and data analysis. For even more opportunities to meet, listen and learn from your peers, you don't want to miss out on this year's conference season. We have partnered with organizations such as O'Reilly Media, Corinium Global Intelligence, ODSC and Data Council. 
Upcoming events include PyCon US in Pittsburgh. Go to pythonpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today. Your host, as usual, is Tobias Macey, and today I'm interviewing Antonio Cuni about HPy, a project aiming to reimagine the C API for Python. So Antonio, can you start by introducing yourself?
     
    Python's Built In IDE Isn't Just Sitting IDLE
    2019-12-24 (duration 36m)
    [transcript]
    26:43 So I think it was more pronounced in the early years, back in the 2000s. When installing many languages, you would need to install something like Visual Studio or another big IDE, or install a build toolchain. In contrast, once you had Python installed — which was usually relatively easy, or, you know, many systems had it pre-installed — you could just get started. Similar to other so-called scripting languages, like Perl for example, that were just there; you could always use them, which made it very easy to pick them up and get going. On the other hand, those scripting languages were usually just written in editors; Python having a GUI IDE, I think, did make things much easier for beginners than, you know, opening vi or Emacs, just in comparison. I think these days, with VS Code and Sublime Text and many other powerful, widely available editors — and for Python, for example, PyCharm is very widespread, but there are other great editors and IDEs out there; I'm just not going to name them all because there are so many — the importance of IDLE has become reduced. Also in terms of interactive environments: I think at the time, IDLE's shell was really the only alternative to just running Python on the command line for an interactive environment. But these days we have IPython and Jupyter and a few others as well that are really very good interactive environments with a lot of features. So I think IDLE's shell has become much less used in recent years, just due to the widespread availability of great other tools.
    09:17 So I think one case when people move on from idle is when they were looking for more powerful editing abilities. For example, anyone who is a power user of an editor such as vi or Emacs or any major IDP, such as Visual Studio, or the newer ones like Sublime Text VS code, they know those have lots of powerful editing features that allow you to just work faster, make changes to code much faster. The idol is purposefully meet missing many of those features, to keep the interface very simple and to keep what you need to learn to use it effectively, to a small set of things to learn, but that also it does definitely limit its usability. For anyone who wants more advanced editing features, it also lacks integration of many other things like refactoring tools that more fully featured IDs, have tools for running tests automatically tools for integrating with version control systems, and so on and so on. So, I think in many professional environments, when people are developing professionally, usually they work in an organization or at a team and they find themselves moving on to tools that are more widely used their team or integrate better with the toolset used by that team. The other side of idol that which we haven't discussed so far, but I think is one of its major advantages is for interactive work. And that's actually where so many ideas don't focus but idle has focused quite a bit. So idle shell its interactive environment which used to just enter commands and see their run command. See their output is very Very useful. It's relatively powerful even compared to alternatives. The it's more comparable to interactive environments such as ipython, or Jupiter. And so that is very usable and has some nice GUI features that are, for example, missing an iPod ipython. And it's also, its features are more easily discoverable things to the GUI compared to ipython, where you have to read more documentation or go through a tutorial. 
So I think for interactive use, I still use it almost daily for interactive things in Python, sometimes just for checking out, you know, what a certain function does, or seeing the help for a certain class. It's much quicker to just fire up the IDLE shell and check it out there than to search through the docs. So I think that's something that stands out in IDLE and makes it very usable, even for people who mostly use other environments for development.
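The kind of quick interactive check described above can be done in any Python shell, IDLE's included. A minimal sketch using the standard-library `inspect` module (the `greet` function is a made-up example for illustration):

```python
import inspect

# Check a builtin's signature without leaving the shell,
# instead of searching through the docs.
print(inspect.signature(sorted))  # e.g. (iterable, /, *, key=None, reverse=False)

def greet(name, punctuation="!"):
    """Return a short greeting for *name*."""
    return f"Hello, {name}{punctuation}"

# Read a function's docstring directly, as help() would show it.
print(inspect.getdoc(greet))
```

In IDLE's shell the same information also pops up as a calltip while you type the call, which is one of the GUI niceties mentioned above.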
     
    The Past, Present, and Future of Deep Learning In PyTorch
    2019-03-10 (duration 42m)
    [transcript]
    23:18 Yeah, that's a very good question as well. Basically, PyTorch kind of started as a pure Python implementation, really. Like, the whole framework was written in Python, except for the, like, core kernels that do, like, the math, because for those, like, if you really want to match the performance of existing libraries, you really need this part to, like, live in highly optimized C and C++. So, you know, a lot of, like, all of the, let's say, logic of the framework, like automatic differentiation and so on, we started with an implementation in Python, and then, you know, throughout the history, with the intent of moving away from this choice, like, we've been moving more and more things to C++, mostly for two reasons. One is that some of this code is, like, really hot, like, it's getting executed, kind of, at every single line of user programs. And so the overhead of this code is really, really important, and Python sometimes just can't deliver the same performance. Although, like, as an anecdote, one downside is that when I ported part of the automatic differentiation system, which was like 30 lines of Python code, I think it expanded into, like, a few hundred lines of C++ code that, like, work with the CPython API and so on. So it's definitely, like, not easy to convert this. But we've been seeing, like, measurable and significant performance gains from this. But this is mostly because, like, it is really hot code that's, like, heavily exercised by the users. We don't really typically see those kinds of speedups if, like, you were to just write your, let's say, machine learning model in Python versus in C++ itself. So this is really only relevant for the library. But at the same time, you know, we have some users who are really interested in running models in pure C++, mostly because, like, they have existing research pipelines in C++. Like, a lot of reinforcement learning research is on games, and a lot of games kind of only have C++ APIs. 
And so those people naturally, like, started their projects in C++, so they would really like to also have PyTorch's features in C++. And this is something that we've been trying to address as well. Like, with the 1.0 release, we also have, like, a beta version of, basically, our Python interface, except in C++. So that also required, kind of, moving a lot of things from Python to C++, although at this point, we've been so far ahead in this work that it wasn't too hard to actually add those C++ bindings. So today, really, most of PyTorch is just a C++ library, and then, like, a thin layer of Python bindings on top of this.
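To make the "framework logic" being discussed concrete, here is a toy reverse-mode automatic differentiation sketch in pure Python. This is purely illustrative of the technique, not PyTorch's actual implementation; the names (`Var`, `backward`, `grad`) are invented for the sketch. Code like this runs at every operation of a user's model, which is why it is the "hot" path worth porting to C++:

```python
class Var:
    """A scalar value that records how it was computed, for reverse-mode autodiff."""

    def __init__(self, value, parents=()):
        self.value = value      # forward value
        self.grad = 0.0         # accumulated gradient, filled in by backward()
        self.parents = parents  # pairs of (parent Var, local partial derivative)

    def __add__(self, other):
        # d(a+b)/da = 1, d(a+b)/db = 1
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(a*b)/da = b, d(a*b)/db = a
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        # Accumulate this node's gradient, then push it to parents (chain rule).
        self.grad += seed
        for parent, local_grad in self.parents:
            parent.backward(seed * local_grad)

# z = x*y + x, so dz/dx = y + 1 and dz/dy = x.
x, y = Var(3.0), Var(4.0)
z = x * y + x
z.backward()
print(x.grad, y.grad)  # 5.0 3.0
```

Even this toy version shows the overhead concern: every `+` and `*` allocates objects and walks a graph in the interpreter, which is cheap to prototype in Python but is exactly the per-operation bookkeeping that benefits from living in C++.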