• 14 Posts
  • 7.09K Comments
Joined 3 years ago
Cake day: June 15th, 2023





  • Does it count as Generational Wealth if future generations must contribute?

    I would certainly think so. Money is fungible.

    Let’s say your grandfather gave your father $10,000 in your father’s teen years, and your father used that to launch into adulthood successfully. Your father grows his own wealth while living, but also gives you the (inflation-adjusted) $10,000 in your teen years, just as his father did for him. Since money is fungible, and your father wasn’t penniless when he gave you the money, that $10,000 effectively came from your grandfather.

    You are correct, and I agree with what you are saying. I had hoped my last paragraph would convey my feelings on my actual position and counter the focused brevity.

    I read that and thought those statements doubled down on your definition of generational wealth being only from a family’s progenitor, and not also contributed to by their offspring.



  • At 2 children per household, the number of households being supported grows at a larger rate than reasonable investment gains. The tripling time at 4% real gain is 28.01 years.

    This statement seems to suggest your thinking is that your wealth today will be the only wealth ever introduced to the system. While that’s possible, I don’t think it’s usually how generational wealth works. Each of your successors also adds to the wealth with their own efforts and endeavors.

    The most important time in their lives for people to have wealth isn’t at the end, but at the beginning. Education is expensive, housing is expensive. Initial opportunities require money when you’re young. Once you’re established, you have your own resources to continue to grow.
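    As an aside, the tripling-time figure in the quoted claim does check out. Here’s a quick sanity check of the compound-growth arithmetic (plain math, assuming nothing beyond the quoted 4% real return):

```python
import math

# Years for a sum to triple at a constant 4% real annual return:
# solve 1.04**t = 3  =>  t = ln(3) / ln(1.04)
real_rate = 0.04
tripling_time = math.log(3) / math.log(1 + real_rate)
print(f"Tripling time at {real_rate:.0%} real: {tripling_time:.2f} years")
# -> Tripling time at 4% real: 28.01 years
```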



  • So a couple of things. Just so ya know, when The Onion started in the late 1980s and early 1990s, it did not loudly announce itself as satire the way people view it now. Early print issues were laid out exactly like a local newspaper.

    Yep, I know. If you were riding the DC Metro in those days you’d find the print edition in newspaper dispensers right next to the Washington Post and the Wall Street Journal (pre-Murdoch ownership). I think I may have a couple of the old ones in a box somewhere.

    You’re also wrongly assuming satire must be obviously humorous.

    I never assumed satire had to be humorous. I said the Onion was satire, and it was humorous.

    We’re Twilight Zone/Black Mirror/J.G. Ballard/early cyberpunk (before the game) journalism, not current era Onion listicle satire.

    You’re making my point for me. All of those sources were known to their consumers as fiction, which allowed the consumer to compare reality against that speculative future. You’re not doing that. You are well aware that most folks who read what you’re writing assume it’s true. You can’t have it both ways.

    And let’s not overlook the fact that tomorrow or the day after, or soon, many of my articles will no longer be fiction.

    I think you’ve moved beyond satire and are headed toward disinformation now.

    tl;dr: Explaining the joke ruins the joke.

    Well hang on, you just said your intent wasn’t humor. Wouldn’t that make your actions humor at the expense of people reading what you’re making up?


  • Can I ask what your goal is? You’re referencing The Onion, which is a satire site. The Onion’s goal is to entertain and provide some social commentary with the protection of satire. Also, with The Onion, it is well known that it is satire. Readers are “in on the joke”. Your description from the sidebar doesn’t look like that.

    The way I read your sidebar message is that all (most?) news in the world is fake and that no news from any source should be trusted at all. It also reads like you’re taking joy in deceiving the readers. The sidebar language suggests no one should trust anything. Am I taking the wrong message, or is that what you intended?



  • It’s slightly worse than you may think. Prior to about a year ago, none of the big three brokerages offered any mechanism to block this type of attack from a thief. I give a lot of credit to Fidelity for not only developing a tool, but making it easily accessible through the customer’s web session.

    Six months ago Vanguard also had no mechanism. Between then and now, they’ve at least developed a backend process to put the functionality in place, but they have yet to build a tool that can enable it from the customer’s web session. So while it’s a bother to call in (and many Vanguard agents don’t know about it yet, like the one I initially talked to), I appreciate that there is now an avenue to put the block on instead of just hoping and praying the theft doesn’t happen to you. I fully expect Vanguard to make the tool much easier to turn on and off in the near future.

    Schwab hasn’t even built an internal tool yet, so phone call or not, all those customers are still at risk.


  • So, what prediction did Bezos make back then, that seems particularly poignant right now? Bezos thinks that local PC hardware is antiquated, and that the future will revolve around cloud computing scenarios, where you rent your compute from companies like Amazon Web Services or Microsoft Azure.

    This isn’t a new idea, and it certainly predates Bezos.

    I’m older now, but throughout my life there has been a pendulum swing back and forth between local compute power and remote compute power. The current run-up in RAM prices follows the exact same path; this cycle has played out half a dozen times already in the last 50 years. Compute power gets cheap, then it gets expensive, then it gets cheap again. Bezos’s statements are just the most recent example. He’s no prophet. This has happened before, and it will revert again. Rinse, repeat:

    • 1970s remote compute power: Terminals couldn’t really compute anything locally and required dialing into a mainframe over an analog telephone line to access the remote computing power.

    • 1980s local compute power: CPUs got fast and cheap! Now you could do all your processing right on your desk without need of a central computer/mainframe.

    • 1990s remote compute power: Thin clients! These were underpowered desktop units that accessed the compute power of a server, such as Citrix WinFrame/MetaFrame or SunOS (for Sun Ray thin clients). Honorable mention for retail-type units like Microsoft WebTV, which was the same concept with different hardware/software.

    • 2000s local compute power: This was the widespread adoption of desktop PCs with 3D graphics cards as a standard along with high power CPUs.

    • 2010s remote compute power: VDI appears! This is things like VMware Horizon or Citrix Virtual Desktops, along with the rise of AWS.

    • 2020s local compute power: Powerful CPUs and massively fast GPUs are now standard and affordable.

    • 2030s remote compute power…in the cloud…probably




  • In an ideal world, as they see your knowledge is harder and harder to replace, they’ll start paying more for it…

    This is true, and it happens to me.

    …and that will hopefully be encouraging enough to the current workforce to learn the skills.

    Here’s the challenge. Someone new who doesn’t have the skills but is enticed by the money has to make two evaluations:

    • How hard is it to learn the skill?
    • How long will the skill be marketable?

    For me, learning the skill wasn’t difficult because it was modern and contemporary technology at the time. Training and support resources existed, and I was able to incrementally learn how those older technologies continued to evolve or be accommodated as new technologies arrived to replace them, but then didn’t. That won’t be the case for someone new. They can’t even use the old training material I used (assuming it is even still around), because it was written assuming the technology was pervasive and well supported, while the opposite is true today.

    As for marketability, this is an even larger gamble. Many of these technologies should have been retired decades ago, but weren’t, for a variety of niche reasons. No organizations are putting out new deployments of these old technologies. The customer base/employers wanting these skills shrink every year as old legacy systems are finally retired, leaving even fewer opportunities for a new person to exercise these newly acquired old skills. It’s a fact that someday there will be no users of them, but when will that be? It should have happened already, so what new worker would want to gamble on extensive learning of technologies that should be dead by the time they master them?



  • The issue you’ll run into is effectiveness at that small scale, so you’ll be tempted to share data with other systems like that, and eventually you’ll end up creating a different flock.

    I wonder if a segregated system design could address this: in-system segregation similar to a TPM, with the actual detection/matching part of the system separated from the command and control part.

    As in, the camera and OCR operations would live in their own embedded system which could never receive code updates from the outside. Perhaps this is etched into the silicon SoC itself. Also on silicon would be a small NVRAM that could only hold requested license plate numbers (or perhaps a hash of them). This NVRAM would be WRITE ONLY, so it could never be queried from outside the SoC. The raw camera feed would be wired to the SoC. The only input would come from an outside command and control system (still local to our SoC) through which an administrator could send in new license plate numbers to search against. The only output of the SoC would be “Match found against License Plate X”. Even the timestamp would have to be applied by the outside command and control system.

    This would have some natural barriers against dragnet surveillance abuse (see the sketch after the list below).

    • It would never be possible to dump the license plates being searched for from the cameras themselves even by abusive admins. The only admin option would be to overwrite the list of what the camera is trying to match against.
    • The NVRAM that contains the match list could be intentionally sized small, to perhaps a few hundred plate numbers, so that an abusive admin couldn’t simply load every possible license plate combination and effectively turn this back into a blanket surveillance tool. The NVRAM limit could be implemented as an on-die fuse link so that upon deployment the size could be made as small as needed for the use case.
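    To make the write-only contract concrete, here is a minimal software sketch of the behavior described above. It is purely illustrative (the class and method names are made up, and a real implementation would live in silicon, not Python): plates go in hashed, capacity is fixed up front like the fuse link, and nothing ever comes back out except match events.

```python
import hashlib

class WriteOnlyMatchList:
    """Software model of the proposed on-die match list: plate numbers
    can be written (as hashes) but never read back, and capacity is
    fixed at 'manufacture' time, like the on-die fuse link."""

    def __init__(self, capacity: int = 200):
        self._capacity = capacity          # set once, like a blown fuse
        self._hashes: set[bytes] = set()   # no method ever returns these

    @staticmethod
    def _digest(plate: str) -> bytes:
        # Store only a hash, so even physically dumping the NVRAM
        # doesn't directly yield the watch list.
        return hashlib.sha256(plate.strip().upper().encode()).digest()

    def write(self, plate: str) -> bool:
        """Admin-facing input path: add a plate to search for. Write-only."""
        if len(self._hashes) >= self._capacity:
            return False  # fuse-limited: refuses bulk "every plate" loads
        self._hashes.add(self._digest(plate))
        return True

    def check(self, ocr_plate: str) -> str | None:
        """Camera-facing path: the only output is a match event.
        Timestamping is left to the external command and control system."""
        if self._digest(ocr_plate) in self._hashes:
            return f"Match found against License Plate {ocr_plate}"
        return None

watch = WriteOnlyMatchList(capacity=200)
watch.write("ABC1234")
print(watch.check("ABC1234"))  # -> Match found against License Plate ABC1234
print(watch.check("XYZ9876"))  # -> None
```

    One caveat worth noting: because the plate space is small, hashing alone wouldn’t hide the list from a determined offline brute-force, so the write-only property of the NVRAM is doing the real work here.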