Art.22 ¶1 declares:

The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her.

without stating who is liable for infringements. Paragraph 3 says:

the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision.

That assumes the data controller is aware of and in control of the AIDM (automated individual decision-making). Often data processors implement AIDM without the data controller even knowing. Art.28 ¶1 says:

Where processing is to be carried out on behalf of a controller, the controller shall use only processors providing sufficient guarantees to implement appropriate technical and organisational measures in such a manner that processing will meet the requirements of this Regulation and ensure the protection of the rights of the data subject.

Of course, what happens in reality is that processors either make no guarantee, or the guarantee is vague with no mention of AIDM. So controllers hire processors blindly. When the controller is some tiny company or agency and the processor is a tech giant like Microsoft or Amazon, it’s a bit rich to put accountability on the controller and not the processor. The DPAs don’t want to sink micro companies because of some shit Amazon did of which the controller was not even aware.

As a data subject, I have little hope that a complaint of unlawful AIDM will go anywhere. It’s like not even having protection from AIDM. The Article 29 Working Party wrote AIDM guidelines in 2017, but they make no mention of processors.

  • iii@mander.xyz · 6 months ago

    who does not even necessarily know the processor they outsourced to uses unlawful AIDM.

    They should! That’s the point! They shouldn’t use bad products, regardless of whether it’s homemade, from a small 3rd party, or from a large 3rd party.

    It would be far more sensible to hit Microsoft or Cloudflare with the liability

    Why is that? It’s not Cloudflare’s responsibility if a 3rd party (from their point of view) illegally uses their services.

    If a restaurant buys nails and puts them in their food, it’s not the nail manufacturer that’s at fault. The argument “but it’s a large nail manufacturer” doesn’t take away one’s own responsibility.

    • debanqued@beehaw.orgOP · 5 months ago

      They should! That’s the point! They shouldn’t use bad products, regardless of whether it’s homemade, from a small 3rd party, or from a large 3rd party.

      Yes they should, but investigative journalism is not a reliable way to get that information disclosed. When the processor secretly uses AIDM and conceals it from the controller, holding the controller EXCLUSIVELY¹ responsible is reckless, because the controller has no right to inspect the servers and code of the processor. It’s a black box. The GDPR requires processors to disclose many GDPR-relevant factors in their contract with the controller, but AIDM is not one of them. It is perfectly legal for a processor to (e.g.) write an algorithm that treats black people differently, and not tell the controller. Putting the responsibility on controllers to investigate and discover unlawful practice is not a smart system.
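
      To make it concrete, here is a purely hypothetical sketch (invented names throughout, not any real processor’s code) of how small the concealed logic can be. Nothing in any disclosure the GDPR mandates would surface it:

      ```python
      # Hypothetical processor-side decision logic: illustrative only.
      # The controller sees the input/output contract, never this body.
      def screen_application(applicant: dict) -> bool:
          """Auto-approve or auto-reject; the controller just gets a bool."""
          score = 0.0
          score += 0.4 if applicant["income"] > 30_000 else 0.0
          score += 0.3 if applicant["years_at_address"] >= 2 else 0.0
          # The concealed, unlawful step: a penalty keyed to an inferred
          # demographic attribute. No contract clause or controller-facing
          # API reveals that this line exists.
          if applicant.get("inferred_ethnicity") == "group_x":
              score -= 0.5
          return score >= 0.5
      ```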

      If a restaurant buys nails and puts them in their food, it’s not the nail manufacturer that’s at fault. The argument “but it’s a large nail manufacturer” doesn’t take away one’s own responsibility.

      For this analogy to work, the nail mfr would have to know that the nails are being put in the food. With knowledge comes responsibility: if the nail mfr is aware of the misuse, the nail mfr is willfully complicit in the abuse. And to make the analogy work, the restaurant would also have to be unaware that the nails were ending up in the food (because AIDM is undisclosed in the case you are trying to make an analogy for).

      (update) Europe does not have the machinery to bring thousands of small mom-and-pop shops into court. It makes no sense from a logistical standpoint and it’s a non-starter economically. I do not oppose controllers having liability; they should retain it. But processors should also have liability when one giant processor is the cause of hundreds of thousands of people’s rights being infringed by way of thousands of controllers. To neglect the giant is to fail at data protection.

      ¹ added that word late! Controllers should be accountable, but not exclusively.

      • iii@mander.xyz · 5 months ago

        I think you’re approaching this from the wrong trust model. You’re trying to answer: “how can I know whether the 3rd party I’ve chosen operates legally?”

        The answer is always: you don’t, until you’ve been given sufficient evidence that they do. The restaurant should not put ingredients into their food that they don’t know are safe for consumption. The website operator should not integrate with 3rd parties unless they have proof there’s no illegal behaviour going on.

        You don’t need an investigative journalist. It’s clear from the get-go that a closed-source US product is a black box that you shouldn’t integrate with, just as it’s clear from the start that you shouldn’t put nails into spaghetti.

        • debanqued@beehaw.orgOP · 5 months ago

          It’s a black box. You can’t know what you don’t know when the information is concealed. Black boxes can be tested (we call it black-box testing), but that is inferior to clear-box testing: too costly and inefficient to rely on wholly. The giant processor has the resources to disclose their use of AIDM. The micro-controller (as in small data controller) does not have the resources to exhaustively simulate hundreds or thousands of demographics of people; they don’t even have the competency to be aware of all the demographics. It’s guesswork and it’s a non-starter. If the controller had that kind of resources, they would not be outsourcing in the first place. It’s also wasteful: having thousands of small businesses and agencies carry out duplicated tests is an extreme misuse of resources and manpower, when the processor already knows who they discriminate against (see the sketch below).
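
          As a sketch of what that duplicated testing looks like, assume a hypothetical processor endpoint decide(profile) that returns a yes/no decision (the names are invented, not any real API):

          ```python
          # Black-box disparate-impact probe: a sketch against a hypothetical
          # decide(profile) -> bool endpoint. The processor already knows the
          # answer; every small controller must rediscover it blindly.
          from itertools import product

          AGES        = [22, 45, 70]
          ETHNICITIES = ["A", "B", "C"]    # incomplete by construction: the
          POSTCODES   = ["1000", "2000"]   # controller can only probe the
          GENDERS     = ["f", "m", "x"]    # demographics it thinks to list

          def approval_rates(decide):
              """Approval rate per ethnicity across the whole probe grid."""
              hits = {e: [] for e in ETHNICITIES}
              for age, eth, pc, g in product(AGES, ETHNICITIES, POSTCODES, GENDERS):
                  hits[eth].append(decide({"age": age, "ethnicity": eth,
                                           "postcode": pc, "gender": g}))
              return {e: sum(v) / len(v) for e, v in hits.items()}

          # Even this toy grid is 3 × 3 × 2 × 3 = 54 probes; a realistic
          # attribute space explodes combinatorially, which is the point.
          ```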

          The black-box testing happens to some extent regardless, but there is no incentive to do it before deployment. The shitshow we call /GDPR enforcement/ ensures that data controllers do their testing on the public, which means people are harmed in the process, because it’s cheaper for the controller (who knows the chances of getting penalised are low when DPAs are up to their necks in 10× the workload they can handle).

          • iii@mander.xyz · 5 months ago

            Exactly: don’t use the black box. You don’t know what it does! It makes no sense to trust or use it!

            • debanqued@beehaw.orgOP · 5 months ago

              Exactly: don’t use the black box.

              That is not what I said. I never said don’t use it. I said black boxes bring problems that require sensible policy.

              Of course it makes sense to use black boxes. Someone running a bakery does not have the competency or resources to deploy an email service. Outsourcing email is the only option that makes the business case viable, unless they discard email entirely, in which case they lose business from customers who insist on emailing orders. From there, all processors are black boxes. There is no email provider who gives you the keys to the castle, and even if one did, as a baker you wouldn’t know what you were looking at anyway. Your choice is: use the black box, or get into the tech business.

              Not even Microsoft can handle email alone. They outsource to Spamhaus, another black box. And Spamhaus outsources to Cloudflare – yet another black box.
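
              For concreteness, the Spamhaus dependency is typically a DNSBL lookup (my assumption about this particular chain, but the protocol itself is standard): the receiving server reverses the sender’s IP, queries a Spamhaus DNS zone, and refuses the mail if any answer comes back. A rough sketch:

              ```python
              import socket

              def is_listed(sender_ip: str, zone: str = "zen.spamhaus.org") -> bool:
                  """DNSBL check: reverse the octets, query the blocklist zone.
                  Any A record means 'listed'; the listing logic is opaque."""
                  query = ".".join(reversed(sender_ip.split("."))) + "." + zone
                  try:
                      socket.gethostbyname(query)
                      return True
                  except socket.gaierror:
                      return False

              # A True silently bounces the baker's mail: an automated decision
              # that neither the baker nor the customer ever sees being made.
              ```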

              • iii@mander.xyz · 5 months ago

                That is not what I said.

                I know, it’s what I said. The sensible thing is to not use them.

                The options (1) use a black box, (2) start a tech company, as you presented in the bakery case, are a false dichotomy. Managed open source is the middle ground.

                • debanqued@beehaw.orgOP · 5 months ago

                  The options (1) use a black box, (2) start a tech company, as you presented in the bakery case, are a false dichotomy. Managed open source is the middle ground.

                  It’s a false middle ground: it still means taking on the burden of tech knowledge. It’s a true dichotomy, as follows:

                  ① use a black box
                  ② become technical

                  (or trichotomy if you figure the baker can nix email)

                  You still have to understand what’s going on in the FOSS box even if it’s managed – otherwise you are in the same position. The point of a managed service is that someone else performs the work you don’t understand. That managed box is still likely to use a Spamhaus gatekeeper or the like, which the baker has no clue about. The baker is still unlawfully using AIDM, unwittingly, because he just saw an ad for the managed service saying “spam free”, thought that sounded good, and has no idea what questions to ask or how it can go badly. He could just as well have asked the relevant questions of the black-box provider. Either way, his business carries on uninformed of the GDPR infringement.

                  BTW, you’re also wrong about managed open source services giving you the needed info, even if the customer is highly technical. I use a managed service running FOSS s/w. I can see the source code that runs on the box, but I cannot see how it is installed or configured. The account dashboard I get is a nannied subset of control: I can do basic tasks like create users, but I cannot see the backend configs or even an inventory of the other software running on the host. There could be all kinds of snooping and shenanigans on that host and I have no way of verifying it. It could be littered with AIDM abuses, but I don’t have a root shell on that host.

                  It’s the same problem in the end. The data processors have no legal accountability for the logic that they control. At the same time, they are not even required to disclose the AIDM logic, or even its existence, to the data controller. Yet the controller is exclusively liable for what they potentially do not control – or are not even aware of. This is all still possible when the processor runs a managed open source service.