search for: Florida Panthers
96 results (0.780 seconds)
group by podcast
How often was "Florida Panthers" found in episode texts?

podcasts

If you would like to be notified about new episodes matching the search term Florida Panthers in the future, simply create an alert for it.

search results

  •  
    BG Dorsten
    2020-09-23 (duration 1h12m)
    [transcript]
    24:55 Yes, good. So, just the way we want it. Um, he just came over, he most recently played, uh, in college in the USA, right? I think in Florida,
     
    Fremde für ein Jahr
    2020-08-25 (duration 2h54m)
    [transcript]
    2:41:02 they are very heterogeneous internally, you can experience a lot there. Right, if you grow up on the East Coast of the USA, um, grow up there, and you travel over to the West Coast or to the Midwest, down to Florida or whatever,
     
    The Art of Trade War 2
    2018-09-21 (duration 1h28m)
    [transcript]
    23:44 Because I'm just imagining, like, a region in the north of Canada and Florida in the USA, that's already an enormous spatial distance, and then you basically take that times four.
     
    Modellbau
    2018-07-23 (duration 1h24m)
    [transcript]
    27:47 Yes, that was the thing with the one in Florida where they,
  •  
    UKW033 Corona Weekly: Mit dem eigenen PKW durch Bayern
    2020-07-01 (duration 1h37m)
    [transcript]
    22:03 Not with the overloading of the hospitals either. But now it's slowly rattling its way into the southern states, Florida, Texas, Arizona,
     
    UKW027 Corona Weekly: Superrot und Supergrün
    2020-05-20 (duration 2h8m)
    [transcript]
    1:22:14 Yes, if you can even get numbers like that; in the USA it looks bad. I just had an article put in front of me, there was some kind of group in Florida that was extremely transparent, awesome.
     
    UKW014 Corona: Modelle und Prognosen
    2020-04-02 (duration 1h52m)
    [transcript]
    1:11:30 Yes, above all, the reaction in the USA is not homogeneous at all; in Florida they are all still walking around and hanging out on the beaches
  •  
    EP8: The things that make me different make me, me
    2019-11-11 (duration 22m)
    [transcript]
    11:34 Florida
  •  
    A High Performance Platform For The Full Big Data Lifecycle
    2019-08-19 (duration 1h13m)
    [transcript]
    36:24 Those are also very good questions. So in that case we need to go down a little bit into the system architecture. So in Thor you have each one of the nodes handling primarily their own chunk of data, their own partition of the data. But there is always a buddy node, some other node that has its own partition but also has a copy of the partition of some other node. If you have 10 nodes in your cluster, node number one might have the first partition and might also have a copy of the partition that node ten has; node number two might have partition number two, but also might have a copy of the partition that node number one has, and so on and so forth. Every node would have one primary partition and one backup partition from another node. Every time you run a work unit (as I said, the data is immutable) you are generating a new data set, every time that you are materializing data on the system, either by forcing it to materialize or by letting the system materialize the data when it's necessary. And the system tries to stream as much as possible, in this way more similar to Spark or TensorFlow, where the data can be streamed from activity to activity without being materialized. And, like I said previously, at some point it decides that it's time to materialize, because the next operation might require materialized data, or because you've been going for too long with data that will be blown up if something goes wrong with the system. Every time it materializes data, there is a lazy copy of the newly materialized data to these backup nodes. So surely there could be a point where something goes very wrong, and one of the nodes dies and the data on the disk is corrupted, but you know that you always have another node that has a copy. And the moment you replace that node (you essentially pull it out and put another one in) the system will automatically rebuild that missing partition, because it has complete redundancy of all of the data partitions in all the different nodes, as in the case of Roxy. So in the case of Florida that seems to be sufficient; there is, of course, the ability to do backups. And you can back up all of these partitions, which are just files in the Linux file system, so you can even back them up using any Linux backup utility, or you can use HPCC to back them up for you into any other system; you can have cold storage. One of the problems is what happens when your data center is compromised and now someone has modified or destroyed the data in the live system, so you may want to have some sort of offline backup. And you can handle all of this in the normal system backup configuration, or you can do it in HPCC and make it offloaded as well. But for Roxy, the redundancy is even more critical. In the case of Thor, when a node dies, it is sometimes less convenient to let the system work in a degraded way, because the system is typically as fast as the slowest node. If all nodes are doing the same amount of work, a process that takes an hour will take an hour. But if you happen to have one node die, now there is one node that is doing twice the work, because it has to deal with two partitions of data, its own and the backup of the other one, and the process may take two hours. So it is more convenient to just stop the process when something like that happens, replace the node, and let the system rebuild that node quickly and continue doing the processing.
And that might take an hour and 20 minutes, or 10 minutes, rather than the two hours that it otherwise would have taken. And besides, if the system continues to run and your drive or your storage system died in one node because it's old, there is a chance that the other storage systems, when they get under the same stress, will die the same way, so you want to replace that one quickly and have a copy as soon as you can, and not run the risk that you lose two of the partitions. And if you lose two partitions that are in different nodes that are not the backup of each other, that's fine. But if you lose the primary node and the backup node for that partition, there is a chance that you may end up losing the entire partition, which is bad. Again, bad if you don't have a backup, and you'll end up restoring a backup of some things the next time. So it's also inconvenient. Now, in the Roxy case, you have a far larger pressure to have the process continue, because your Roxy system is typically exposed to online production customers that may pay you a lot of money for you to be highly available.
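    A rough way to picture the buddy-node scheme described in this excerpt (an illustrative Python sketch only, not HPCC Systems code; the function names and the round-robin layout are assumptions for the example): each node serves its own partition and keeps a backup copy of a neighbour's, so a single node failure still leaves one copy of every partition, while losing a node together with its buddy is the case where a partition can be lost outright.

        # Toy sketch of the buddy-node replication described above; not HPCC code.
        # Node i serves partition i and keeps a backup copy of partition (i - 1) mod N.

        def buddy_layout(num_nodes: int) -> dict:
            """Map each node to the partition it serves and the one it backs up."""
            return {
                node: {"primary": node, "backup": (node - 1) % num_nodes}
                for node in range(num_nodes)
            }

        def surviving_copies(layout: dict, dead_nodes: set) -> dict:
            """Count how many live copies of each partition remain after failures."""
            copies = {partition: 0 for partition in range(len(layout))}
            for node, parts in layout.items():
                if node in dead_nodes:
                    continue
                copies[parts["primary"]] += 1
                copies[parts["backup"]] += 1
            return copies

        layout = buddy_layout(10)
        print(surviving_copies(layout, dead_nodes={3}))     # every partition keeps at least one copy
        print(surviving_copies(layout, dead_nodes={3, 4}))  # partition 3 drops to zero copies

    Losing any single node only degrades the cluster; losing a node and its buddy together is the unrecoverable case, which is why the speaker prefers stopping and rebuilding the dead node rather than running in degraded mode.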
     
    Straining Your Data Lake Through A Data Mesh
    2019-07-23 (duration 1h4m)
    [transcript]
    59:03 I think we pretty much covered everything. I would probably maybe overemphasize a couple of points. I think making data a first-class concern, an asset, you know, structured around your domains, does not mean that you have to have well-modeled data. It could be your raw data, it could be the raw events that are being generated from the point of origin, but with this product thinking, and, you know, self-serve, and some form of understood or measured quality and good documentation around it, so that other people can use it, and you treat it as a product. But it doesn't necessarily mean we are doing a lot of modeling of the data. The other thing that I would mention, and I guess we have already talked about it, is the governance and standardization. I would love to see more standardization, the same way that we saw with the web and, you know, with APIs, applied to data. So we have a lot of either open source, like a lot of different open source tools, or a lot of different, you know, kind of proprietary tools. But there isn't, you know, there isn't a standardization that allows me, for example, to run distributed SQL queries across a diverse set of data sets. I mean, the cloud providers are in a race to provide all sorts of, you know, wonderful data management capabilities on their platforms, and I hope that race will lead to some form of standardization that allows, you know, distributed systems to work. And I think a lot of the technologies we see today, even around data discovery, are based on the assumption that data is operational data hidden in some database in a corner of the organization, not intended for sharing, but we need to go find it and then extract the data out of it. I think that's Florida predisposition. I think we need to think about tooling that would allow intentionally shared, diverse sets of data sets, and what does that mean? Like, there's a huge opportunity for tool makers out there; I think there is a big white space to build next-generation tools that are not designed to find the data, the, you know, bad data hidden somewhere, but designed to share and make intentionally shared, intentionally treated-as-assets data discoverable and accessible and, you know, measurable and queryable, but distributedly owned kinds of data sets. So I think those are the few final points to overemphasize.
     
    Unpacking Fauna: A Global Scale Cloud Native Database
    2019-04-22 (duration 53m)
    [transcript]
    50:27 Evan Weaver: To me, the biggest gap is really that serverless edge experience. Like, we're pushing the granularity of application building down to literally nothing. Like, you had kind of a series of incremental paradigm shifts from physical servers to co-located or leased servers to virtualized servers to containers, and they're still all little servers. It's like if, for every thought you had, you had to mentally conceptualize it as being in a box; it doesn't make sense from a productivity perspective to think about software, especially distributed software, this way. Like, who cares how many functions can run within one container? I don't. I just want to know if I have the aggregate capacity to execute the workloads my users are generating. And it requires a complete inversion of that abstraction, which we finally have now, for the most part, with serverless frameworks on the compute side; we've had it for a long time with CDNs on the caching side. But data, especially operational data, is always the last thing to move, because it's the riskiest. So you can now get, you know, some serverless analytics capability with things like Snowflake, but your canonical, operational, user-generated, mission-critical, you know, data, which is the existential underpinning of the business, still lives in, essentially, you know, a mainframe. And what we're trying to push with Fauna, and what the entire industry needs to push, is, you know, bringing this paradigm to its logical conclusion, which is: you shouldn't have to care, or even know, as an application developer, how your data tier is operating; it should be completely orthogonal. And at the same time, as an operator, you shouldn't have to care what your applications are doing. Like, the model of a DBA who has to, like, go in and, like, tune queries and make sure everything is safe to execute and fail over nodes to hot spares and stuff is an 80s model. We need to move past that to an arm's-length utility computing service model where, if something's behaving badly, you know, in Florida, for example, if the application is consuming too many resources, you lower its priority; you don't have to know what it's doing as an operator. And if you want global resources as a developer, just provision a new database; you don't even have to think about where the data centers are located. That's the experience we're closer to with serverless, and we're already there with CDNs, but data is just harder, because the quality bar is so astronomically high. Because, you know, I mean, the NoSQL movement was notorious for essentially killing businesses, like, Digg comes to mind with their experience with Cassandra, and people are smarter now, and they demand that their database vendors really do the work. But until the vendors do, like we're doing it for Fauna, we're still going to be stuck in that mainframe mindset.
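    A minimal sketch of the operator stance described here, throttling a tenant that consumes too many resources by lowering its priority without inspecting what its workload does. The Tenant class, its fields, and the thresholds are hypothetical stand-ins for illustration, not Fauna's actual API.

        # Illustrative only: demote tenants that exceed their resource budget.
        from dataclasses import dataclass

        @dataclass
        class Tenant:
            name: str
            used_ops: int       # operations consumed in the current window
            budget_ops: int     # operations allowed in the current window
            priority: int = 10  # higher means scheduled sooner (hypothetical scale)

        def rebalance(tenants: list) -> None:
            """Lower the priority of tenants over budget, restore the rest."""
            for t in tenants:
                if t.used_ops > t.budget_ops:
                    t.priority = max(1, t.priority - 5)  # behaving badly, so deprioritize
                else:
                    t.priority = 10                      # back within budget

        tenants = [Tenant("analytics", used_ops=9000, budget_ops=5000),
                   Tenant("checkout", used_ops=1200, budget_ops=5000)]
        rebalance(tenants)
        print([(t.name, t.priority) for t in tenants])   # analytics demoted, checkout unchanged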
  •  
    Was macht eigentlich ein Agile Coach?
    2019-08-12 (duration 58m)
    [transcript]
    35:03 in order to achieve a flexibility of the company here, so that the frameworks become Florida, so that change becomes possible, so that a permeability of information and of people and of actions arises,
     
    Einblicke in die Arbeitswelt der US-Westküste - Teil 1
    2019-08-01 (duration 1h6m)
    [transcript]
    04:21 the one whose parents lived in Florida; I had also already lived in Florida for a year before that, so I knew it quite well from there
    07:02 to sell model railways online, Swiss model railways, with a friend of mine there in Florida
    04:09 Back then I first went to Florida, and, um, as has happened so often in my life, uh, my girlfriend at the time and later wife was an American
  •  
    Algorithmic Trading In Python Using Open Tools And Open Data
    2019-06-17 (duration 50m)
    [transcript]
    42:09 Jared Broad: To further give some background to that: we have terabytes of this price data, so futures, options, equities, forex, crypto. But really, to build an algorithm, there's a little bit more than just price data that's required. So most people, they say, hey, we'll just apply this technical indicator to this price data, and that'll be our strategy. But most of the time, because all of that price data is open, all of those technical strategies that might have worked in the 1970s and 1980s don't really work in live trading anymore. So the whole financial industry is moving towards using alternative data in their strategies. One of the big things I think is happening at the moment, that's really important for the community to understand, is this move from focusing on price data alone to focusing on a core hypothesis. So when people are designing an algorithm, they're pulling in much more of the scientific method. So instead of just saying, I'm going to hack around with the data until I find something that fits, they first need to start with an idea. And that idea has to be something fundamental that will move the markets. So for example, you might say that when there's more sunshine, you're going to have more oranges produced, and that is going to cause a surplus of supply of oranges, and so orange juice futures contracts are probably going to fall. And so there's a definite cause and effect in that relationship between the sunshine and the orange juice. And it might be weak, and it might just be a hypothesis, but then you need to go and test your idea, and you see if it has merit. And so you might pull in things like: if the Federal Bank increases interest rates, then the mortgage rates are going to go up, and so thus any real estate investment ETFs are probably going to go down, because they won't be able to buy as much with the capital that they have. And so there are lots of different causes and effects, whether it be in the real world, looking around you and seeing how things interact, or in sort of the virtual world, in the financial markets, and how different mechanisms interact with each other. But really, to monetize those things, your algorithm needs to be connected to what's called, these days, alternative data: data which is not just the price data of the markets. It's a signal that's about the world, you know; it's a signal about sunshine, and the current sunshine hours in Florida, or, you know, the current federal interest rates, and using that as a way that your algorithm can start to trade and act on those signals. And so at QuantConnect, recently, we've started this project to work with alternative data vendors and get them to import their data into the QuantConnect repository. And that way our community can go and design these epically powerful algorithms on not just price data, but all of this alternative data that covers your social sentiment, Twitter,
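    A toy version of the hypothesis-first workflow described in this excerpt (more sunshine leading to a bigger orange harvest, and therefore lower orange juice futures). The series names and the crude backtest are stand-ins for illustration; this is not QuantConnect/Lean code or a real data feed.

        # Toy hypothesis check, not a real trading strategy or any platform's API.
        import pandas as pd

        def sunshine_signal(sunshine_hours: pd.Series, lookback_days: int = 30) -> pd.Series:
            """-1 (expect OJ futures to fall) when recent sunshine runs above its
            rolling average, +1 when below, 0 otherwise."""
            rolling_mean = sunshine_hours.rolling(lookback_days).mean()
            signal = pd.Series(0, index=sunshine_hours.index)
            signal[sunshine_hours > rolling_mean] = -1  # surplus supply expected, go short
            signal[sunshine_hours < rolling_mean] = 1   # shortfall expected, go long
            return signal

        def crude_backtest(signal: pd.Series, futures_returns: pd.Series) -> float:
            """Sum the returns from trading yesterday's signal against today's return."""
            return float((signal.shift(1) * futures_returns).dropna().sum())

    The signal comes from a stated cause-and-effect hypothesis about the world, and the price data is only used afterwards to check whether the hypothesis has any merit, which mirrors the workflow the speaker describes.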
  •  
    Wir streamen seit 45 Minuten
    2019-03-28 (duration 1h43m)
    [transcript]
    19:25 Yes, I say, even worse, am I supposed to eat up my beach chair over there in Florida.
  •  
    FS231 Marmorkuchenstadt Hommelbach
    2019-02-07 (duration 4h5m)
    [transcript]
    25:20 Where to, in your strange work, Florida, Reinier, Apple Note, Apple Notes, oh, the vital peace, Illnau.
     
    Mein Hotel brennt
    2018-07-06 (duration 4h37m)
    [transcript]
    1:28:46 but not that much more; I mean, Jamaica is still quite a bit south of Cuba, which is quite a bit south of Florida, so it really isn't that far from there to the equator anymore; sure, it's still a bit, but.
Didn't find what you are looking for? A podcast or an episode is missing? You can provide a feed URL that will be added to the database. You can do that here. If you think there's a bug, please contact us. Thanks!