ChilCast: Responsible, Ethical AI Through Philosophy and Data Science

Jul 7, 2023

On this episode of ChilCast: Healthcare Tech Talks, we had the opportunity to have a conversation with one of the leading philosophers of technology and AI, Dr. Mark Coeckelbergh, and data scientist Jona Boeddinghaus, who are founding members of Dai.ki, a collective of ethical, legal, design, and data science practitioners passionate about responsible AI. Dr. Jody Ranck leads the discussion on the implications for society and healthcare as we collectively continue to implement AI technology.

The conversation ranges from how we need to think about power and AI, to regulatory discourses and their limitations, to an overview of a case study of a recent engagement they led with a hospital in Basel, Switzerland on implementing a large language model in medicine and healthcare. Stream this episode below or wherever you get your podcasts!

Transcript:

Jody Ranck: [00:00:14] Welcome to Chilmark Research’s podcast, ChilCast. I’m Jody Ranck and I’ll be your host today. Today we’re fortunate to have with us two individuals from the consulting group Dai.ki, based in Vienna, or the EU more broadly. We have Jona Boeddinghaus and Mark Coeckelbergh with us, and they’re going to talk about everything ranging from LLMs to the political philosophy of AI, and to where we need to go in the regulatory discourse and beyond for thinking about how to do AI responsibly in health care. So welcome, Jona and Mark. Glad to have you.

Jody Ranck: [00:00:50] We’re first going to begin by talking to Mark Coeckelbergh. You may be familiar with him from his quite impressive number of books, ranging from the political philosophy of AI to the MIT Press books on AI ethics and robot ethics. He also has an interesting book on self-improvement that many of our listeners in digital health should definitely read; it’s rather counterintuitive to much of the mainstream digital health world, but very important to how we need to think about some of these tools in the wellness discourse. Jona Boeddinghaus is the COO of Dai.ki, and he’s going to talk to us quite a bit about some of the more technical aspects of their work in responsible AI with a hospital in Basel. So I first want to begin with Mark: if you want to give us more of an intro to your background in political philosophy, why you think political philosophy is relevant to AI, and the insights you’ve gotten from digging into AI around issues like power, power in society, technology and so forth, which have been the focus of your work.

Mark Coeckelbergh: [00:02:02] Yeah, I’ve been thinking about technology for quite a while now. When I was first working on the ethics of AI, and on the ethics of robotics before that, I noticed that much of the discussion is at the individual level, for example individual privacy, individual well-being and so on, also in health care. But many of the challenging questions, where we have to decide what to do and what is right, are situated at the level of the political. So I started doing more research into the political aspects and used my background in political philosophy to do that. And then I wrote that book, The Political Philosophy of AI, where I discuss some key political principles like freedom, justice and democracy, and explore what they mean in the light of these new possibilities of AI.

Jody Ranck: [00:03:04] Could you maybe elaborate a bit more for our audience, folks in the technology world, people interested in health and health care? Unpack some of that work on democracy, justice, freedom and so forth in regard to AI. Why should folks in health IT and health care care about those issues? If you could kind of broaden their horizons around that.

Mark Coeckelbergh: [00:03:33] Absolutely. Yeah. So health care, when it comes to ethics, is often seen as a field that should be governed by some bioethics principles like “do no harm,” “be beneficent to the patients,” and respect patient autonomy. So we already have quite an ethical framework for medical and bioethical questions. But I think what’s lacking there is a focus on exactly what AI is doing in health care and what normative questions could arise from that. I can give an example. If you have image recognition technology, where AI is used to see patterns in images and can be used in that way for diagnosis, I think this is a really helpful kind of application, but it also raises some ethical issues. For example, if the data the system is trained on is biased in some way, for example if it is developed only on the basis of data from male patients but then applied more widely, then this can be an issue. So there are some specific problems there, and some of these have a political aspect, like this one, because it relates to a bigger question in our society: what’s a fair way of treating people? And what does it mean not to discriminate, for example? It can be justified sometimes to make a difference between different groups of people, but when is it justified and when not? So I think the sort of micro situation, the ethical situation with the doctor, the patient and the AI, is linked to those broader discussions.

Mark Coeckelbergh: [00:05:24] And I think with political philosophy we can go into these discussions. Another issue, for example, is that within that doctor-patient relationship you want the patients to have autonomy, and you have the modern way of thinking about this: that patients also have a voice, that there’s not just the authority of the doctor but involvement, participation. Now that connects to a wider political issue: how do you treat citizens? Do you involve them in decision making, also decision making about AI? Do you involve them in the big questions about how to organize society in a fair way, for example? So there are lots of issues, I think, where there is a political side, and I think we can do ethics of health care and ethics of AI in health care in a way that looks both at the ethical issues in the interaction of people with technology and with each other, and at that wider level where huge decisions also have to be made, for example about the financial resources that are always limited, always scarce: how to distribute them, how to use them for these technologies, where to use them first.

Mark Coeckelbergh: [00:06:39] You know, to what extent are we going to automate health care? Is it all right to automate? What’s the place of the human in there? Another big question is responsibility. When you automate with AI, you might want to replace the doctor. And even if that’s not the case, there’s still the question of what happens if something goes wrong, for example false positives or false negatives: when the system, for example, says that you have cancer but you have not, or it does not detect the cancer but you have it. These are ethically relevant problems, of course. And the question is, if we delegate medical decisions like that, about diagnosis, to the machine, who is responsible when that machine makes an error? And do we accept such errors? Again, that has a political aspect, because to what extent do we as a society accept risks, and what level of risk are we prepared to tolerate? For example, in traffic we’re prepared to tolerate that there are many deaths on the road. What are we prepared to tolerate with these new technologies? So I think these are the sort of bigger political debates that we will need to have in our society once we apply AI across the board in medical applications and medical practices.

Jody Ranck: [00:08:05] On that note, have you seen anything interesting, at least from the European perspective, on how, as a society, we might need new institutions, or maybe just use existing institutions in novel ways? Because there’s the issue of who is the “we” making the decisions and determining these thresholds. If we look at autonomous vehicles: how many pedestrian deaths do you tolerate while they experiment, possibly never getting autonomous vehicles actually working in a safe way? In health care, we have a lot of patient groups that work to inject patient voices into clinical trials, into the regulatory discourse and so forth. But at least here in the US, you don’t see anywhere near what I think we’re going to need in terms of citizen engagement with AI and the avenues to do it. Medicine tends to be fairly hierarchical, and then we rely, in my opinion, too heavily on bioethicists. From the Human Genome Project era we had the bioethics discourse. But I’m a Berkeley social scientist, so we were all trained in Foucault and had the influence of Paul Rabinow; we need to question even the bioethicists here at times, and look at where principlism and certain ethical discourses fall short, because there is a political issue here. So what I’m driving at is: do you see anything interesting or innovative in terms of how we engage with the citizenry about AI, especially in regard to health care?

Mark Coeckelbergh: [00:09:57] Yeah, that’s a good question. I think it raises the bigger issue in our society of what place expertise and experts should have, given that we have these advanced technologies which are not very transparent even to experts sometimes, and which are certainly not well understood by laypeople. The problem is that there is this danger that we delegate our decisions about the future of those technologies and the future of health care to specialists: to medical specialists on the one hand and, on the other hand, to the technology specialists and indeed the ethics people. Ethics is also an industry now, a significant part of the fields of expertise, the landscape of expertise. Since the beginning of philosophy there has been the temptation to say: well, let the experts rule, let the experts decide what is best for us. And the truth in that is that, indeed, in this kind of complex modern society we need the experts. Absolutely we need them, because the technologies are difficult to understand and the social and political problems are complex. But on the other hand, if we want democracy, if we think it’s important to take people seriously as autonomous people and as citizens who should have a say in our technological future, it’s very important to involve them.

Mark Coeckelbergh: [00:11:28] And I think we already have some methods to do that; social scientists like yourself especially know about these methods. I think, if we want, we can have more deliberative and participative ways, both at the general political level and at the level of organizations in health care, for example at the level of hospitals and so on. So I think there’s a lot of possibility there to make things more democratic and participative, but we have to find the right mix with expertise. How do we bring together experts and laypeople? How do we steer and guide these kinds of discussions? Because if you just put people together, it’s not necessarily a good discussion; you need mediators. And I think, for these reasons, it’s good if academics and people in industry think about new institutions and new procedures that enable people to come together in an organized and effective way, where both the experts and non-experts have a say and can together talk about these ethical issues with AI in general and AI in health care.

Jody Ranck: [00:12:47] Have you seen any uses of citizen juries or citizen councils in the domain of AI? I know they’ve had them in different places around the world in regard to other technologies. Have you seen anything in regard to AI?

Mark Coeckelbergh: [00:13:04] Yes, I’ve seen that. For example, in France they had these kinds of citizen councils. There are also ideas at American universities about using mini-publics: some randomly selected people, for example, who are then put together and deliberate about AI. So I think it’s starting, that people are thinking about how to apply these methods to the governance of AI. I think in health care, as you said, there is a sort of hierarchical, slightly authoritarian tradition. I think we see, however, that with patient organizations and so on, and with some experience of involving patients and giving them a say in medicine and health care, we can expand on that and strengthen that side. But I think it does need a policy that facilitates this; people are not necessarily going to do it by themselves. So some top-down initiatives that are there need to be supported, and there needs to be more research on how to do this, because it’s not entirely clear how.

Jody Ranck: [00:14:18] There’s an interesting book. I’m sorry, I’m blanking on the author’s name; he’s a law professor at Cornell. It’s called Voices in the Code. You may be familiar with this, but he looked at the whole history of the development of the algorithms for kidney transplants, the queues for kidney transplants. And essentially what you see is that it took about 10 or 15 years of back and forth between the expert knowledges, patient groups, and growing awareness of who gets excluded in one form of the algorithm, or the bias in the algorithm, so to speak, over time. And there was this back and forth over pushing to get different voices, different subjectivities and so forth into that discussion. It’s a really interesting case study of how this plays out in health care, and in that case it took an incredibly long period of time, and it’s still open-ended. The main algorithm used to monitor kidney function in the US, for example, was based on a 1999 study that treated race as a biological construction rather than a social construction, which was flawed science in regard to glomerular function. And the result of that algorithm was that Black patients got pushed lower on the kidney transplant list and got recommended medications later.

Jody Ranck: [00:15:50] So from 1999 until January 2023, that algorithm was used on the basis of flawed science. And the same thing goes for lung function, on and on and on. There are a lot of these algorithms being used, but many of them don’t have patient groups to push back. The kidney function one had a very organized patient group, but even then it only recently got changed. So I’m hoping there’s a way to learn from some of these patient group activities and these citizen juries and so forth, and maybe come up with a more expedient and thoughtful way of regulating or challenging algorithms where they go wrong, or of developing algorithms in a different way from the very get-go, I hope. But on that note, on the issue of developing algorithms and getting more toward the data science and technical aspects of responsible AI, although I don’t view them as separate: why don’t we shift to Jona now? Tell us a bit about Dai.ki, why you created it, what your mission is, and then maybe give us an example of one of your recent activities with the hospital in Basel that you’re working with and how you engaged with that client.

Jona Boeddinghaus: [00:17:08] Yes, happy to. Thanks. So let me start with the bigger background. We have been in machine learning and AI development for many years now, over eight years actually, so we have basically experienced the complete AI hype cycle back and forth. We have been doing ML development and projects in health care for many years, in different areas: cardiology, pathology, radiology, always as the partner, the software partner. So we work with domain experts, physicians or doctors in different areas, and use their expertise to define problems and develop the solutions together. And then we are the ones really developing the bespoke machine learning algorithms or AI solutions, most of the time packed into a complete software solution, to produce great health care products. What we started to realize is that ML development is driven, on a very practical level, by data scientists and technical engineers, which works of course when it comes to pure ML development. But there are limitations when you try to be successful, especially in health care, when it comes to sustainable solutions and user acceptance. So even if you don’t think about AI ethics, it’s a big issue for companies, especially smaller ones, how health care institutions can adopt AI systems in the long run.

Jona Boeddinghaus: [00:18:41] There are so many pilots in the space, but not many products that actually go into production. Two years ago we had a funded research project about ethical AI, so together with Mark we started this research collaboration where we as developers really tried to explore how we can embed these principles, these ideas in this field, into everyday machine learning development practice. So the idea was really: how can machine learning developers and data scientists work together, quite practically, with ethicists and designers, so multidisciplinary teams, to take this whole development to a more holistic and hopefully more sustainable and successful level? That was the origin. And the research project was so interesting and went so well that we have now founded Dai.ki, Mark and I and six other great founders. It’s now three months old, and our mission is really to help other teams like us integrate, develop and use AI in a responsible way in health care and other domains. Health care obviously is a very strong focus.

Jody Ranck: [00:20:04] And in our previous discussion, you mentioned you have a client, a hospital in Basel. Maybe you could tell us a bit about that client, why they came to you and how you began that engagement with them.

Jona Boeddinghaus: [00:20:17] Yes. So the Basel client is basically a long-term customer or partner of ours; we did several projects with them already, and just recently they came to us with the idea to use large language models for their daily reporting. So the idea is really: the physicians take their notes, they do their diagnoses and their daily routines, and then at the end of the long shift and the long day they need to compile and write these long reports. That’s a really quite tedious task and is usually seen as quite some overhead, as in the US.

Jody Ranck: [00:20:55] This is often referred to as “pajama time.” At the end of the day when they go back home, this is eating into their private lives, and it’s one of the factors that leads to a lot of clinician burnout. A very important issue.

Jona Boeddinghaus: [00:21:11] Yes, exactly. And so the idea was really to use the data that is already there anyway and, based on this data, produce these reports, partly automated. Of course, it’s an idea that came up with the advance of all these new, very powerful large language models. So it’s a very interesting project from a technology point of view, but it also raises quite a lot of questions about responsible and ethical AI.

Jody Ranck: [00:21:39] And what are some of those issues? The reason I ask is that in the last few months, since generative AI has become the raging storm that it is, it’s been interesting to watch on LinkedIn. Many of the clinicians I follow are deep into AI and bioinformatics, and recently, now that they’ve had a couple of months to experiment and play with these things, I see physicians saying: well, hold on here. Yes, we need something to reduce that so-called pajama time and automate some of this. But there’s also this thing called thinking that happens when we write notes. At the end of the day, you can reflect on those notes, think about them further, analyze. And if we automate that part out, then clinical skills may begin to decline. So to me that represents a bit of a design challenge. Is there a way to automate out the drudgery of that, but without losing the thinking component? Have you seen anything around that, or what has your experience in Basel been so far? And Mark, if you have any thoughts on that too, because you’re thinking about automation and robot ethics, I’m sure you’ve thought about that as well.

Jona Boeddinghaus: [00:23:03] Yeah. And also maybe on the question of what “thinking” in this context actually means, especially with regard to LLMs: also a very interesting topic that Mark can answer better, of course. Maybe just my take on this for the Basel project. So yes, it’s a very interesting question, and it’s almost a general pattern that occurs in all these projects, I guess. In the end you want to reduce overhead, but at the same time keep a very clear focus and a precise thinking element involved. It also comes down to this responsibility point that Mark mentioned before. You want responsibility and human oversight that is indeed anchored in the clinician sitting in front of the computer and really signing the report, and not just signing it, but really being responsible, knowing what’s in there and doing their own check. So that’s really interesting. And I think, as you said, it’s indeed a design question, and that’s exactly the reason why, with Dai.ki, we think you need this holistic approach. It’s not enough to solve this whole problem on a purely technical data science and machine learning level, because even the best large language model can only achieve a level of accuracy that still doesn’t fulfill these responsibility and liability requirements. So it’s indeed a design question. For example, what we are trying to do in Basel, with user research and of course a lot of communication with the partners, is to think about ways we can really steer attention to the sections that the AI produces: so, for example, not checking or signing off the complete report, but really controlling and checking section by section what is in there, and having an interactive system that helps the clinicians see what’s going on.
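
To make that section-by-section sign-off idea a little more concrete, here is a minimal, hypothetical sketch in Python. It is not the Basel implementation; the class names, section titles, and draft text are all placeholders. The idea it illustrates is simply that the unit of human oversight is the section, not the whole document: each AI-drafted section carries its own review status, and the report can only be assembled once a clinician has explicitly approved, and possibly edited, every section.

```python
from dataclasses import dataclass, field

@dataclass
class ReportSection:
    title: str
    ai_draft: str            # text proposed by the language model
    approved: bool = False   # flipped only by an explicit clinician action
    final_text: str = ""     # clinician-reviewed version that goes into the report

    def approve(self, edited_text: str = "") -> None:
        """Record the clinician's sign-off; they may edit the draft before approving."""
        self.final_text = edited_text or self.ai_draft
        self.approved = True

@dataclass
class DischargeReport:
    sections: list = field(default_factory=list)

    def pending(self) -> list:
        """Sections the clinician still has to check."""
        return [s.title for s in self.sections if not s.approved]

    def finalize(self) -> str:
        """Only assemble the report once every section has been signed off."""
        if self.pending():
            raise ValueError(f"Unreviewed sections: {self.pending()}")
        return "\n\n".join(f"{s.title}\n{s.final_text}" for s in self.sections)

# Illustrative usage: the clinician reviews each AI-drafted section in turn.
report = DischargeReport([
    ReportSection("Diagnosis", "Suspected community-acquired pneumonia."),
    ReportSection("Medication", "Amoxicillin 1g three times daily for 7 days."),
])
report.sections[0].approve()
report.sections[1].approve(edited_text="Amoxicillin 1g three times daily for 5 days.")
print(report.finalize())
```

The sketch deliberately makes the "thinking" step explicit: the report cannot be finalized until a human has looked at, and taken responsibility for, every section the model proposed.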

Jody Ranck: [00:25:03] And in terms of your engagement in Basel, can you talk a bit about the data governance and privacy dimensions of that? I know LLMs face some additional issues around that, with data leakage and so forth, and in health care that’s a big deal when patient data gets leaked.

Jona Boeddinghaus: [00:25:23] Yeah, indeed. And on top of that there’s GDPR, which is very strict, as you know. The first instinct, so to say, with these projects is of course to use a system like ChatGPT, because it’s so powerful. I mean, GPT-4 is really wildly successful in so many use cases, including health care and medical texts. But obviously you can’t do this, or it’s not a good idea at least, if you take data privacy and data protection seriously. So indeed, it’s a technical task: how can we do private, or privacy-first, AI with these large language models? And also, how can we frame this again in the bigger context and make sure that you can use AI services? Because I truly believe that AI is becoming more and more a service; it’s like building blocks that you can and should use. So how can you integrate these blocks without reinventing the wheel and still be compliant with data protection rules? It’s a big challenge, yes.
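
One common pattern for keeping patient text inside a hospital's own infrastructure, rather than sending it to a hosted service like ChatGPT, is to run an open-weight model locally. The sketch below is purely illustrative and assumes the Hugging Face transformers library; the model name, prompt, and notes are placeholders, not a description of the Basel setup or of Dai.ki's approach.

```python
# Minimal sketch of a "privacy-first" generation step: the model runs on
# hospital-controlled hardware, so the clinical notes never leave it.
# Model name, prompt, and notes below are illustrative placeholders.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # any self-hosted open-weight model
    device_map="auto",
)

notes = "55-year-old patient admitted with chest pain; troponin negative; ..."
prompt = (
    "Summarize the following clinical notes into a draft discharge report, "
    "organized into labeled sections:\n" + notes
)

draft = generator(prompt, max_new_tokens=300, do_sample=False)[0]["generated_text"]
print(draft)  # the draft still goes to a clinician for section-by-section review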

Jody Ranck: [00:26:28] Did you want to talk at all about the use of differential privacy with this client as well? Or do we want to go more into the issues around the EU AI Act and what’s going on with that of late?

Jona Boeddinghaus: [00:26:44] Yeah, maybe just one word on differential privacy. It is indeed a very interesting topic, because that’s a privacy-enhancing technology we have used for over three years now in our projects, and it’s becoming a gold standard in anonymization. When we started to use it, it was really unknown, at least in Europe. And we truly believe that it’s a good solution for data privacy, because it basically mathematically guarantees that it doesn’t make a difference whether one data point, so one patient for example, is included in a data set or not. So it’s a very, very strong privacy guarantee, and it’s a great tool. But it has limits, especially when it comes to text. These latest large language models are not perfectly compatible with this technology, because it really is a statistical method that works very well on structured data; it’s hard to define similar concepts in the domain of language.
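
For readers new to differential privacy, the guarantee Jona describes is typically achieved by adding noise calibrated to a query's sensitivity and a privacy budget epsilon. Below is a minimal, textbook-style sketch of the classic Laplace mechanism for a counting query; it is an illustration, not Dai.ki's implementation, and the cohort and epsilon value are made up.

```python
import numpy as np

def dp_count(values, predicate, epsilon: float) -> float:
    """Differentially private count using the Laplace mechanism.

    A counting query has sensitivity 1: adding or removing one patient changes
    the true count by at most 1, so Laplace noise with scale 1/epsilon gives
    epsilon-differential privacy for this single query.
    """
    true_count = sum(1 for v in values if predicate(v))
    noise = np.random.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

# Illustrative usage: how many patients in a (made-up) cohort are over 65?
ages = [72, 48, 66, 59, 81, 70, 44]
print(dp_count(ages, lambda age: age > 65, epsilon=0.5))
```

Smaller epsilon means more noise and a stronger guarantee; as Jona notes, extending this kind of guarantee from structured queries to free text and large language models is much harder.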

Jody Ranck: [00:27:50] Okay. And Mark, maybe now we could talk a bit about your engagement at the policy level and what you’re seeing with the EU AI Act, and kind of where you see that going. What are some of the limitations of regulatory compliance discourses? And then I’m curious, as a philosopher, how you engage with these clients of Dai.ki’s and operationalize your style of thinking about technology, going beyond just the regulatory discourse. What’s your take on that discourse, and then, beyond that, on the lived experience of data scientists and hospital folks trying to figure all these things out?

Mark Coeckelbergh: [00:28:37] Yeah. The regulatory discourse that’s connected to that act is a lot about risk, and that’s generally the right approach. I mean, technology is not necessarily bad, but it doesn’t always immediately lead to better or even good outcomes either, so the ethical discussion needs to be about risk. But it’s hard to classify systems in terms of low risk, medium risk or high risk, because that depends a lot on the actual use and the context in which it’s used. So an application like ChatGPT can be totally harmless when you use it for writing a short text, for example, but it can also, in the form of a chatbot, lead to interactions that actually harm people. That means this kind of AI, this kind of technology, is very difficult to regulate. And what we don’t know at the moment is exactly what the impact of the regulation is going to be: how judges will interpret things, and what companies, which often are already anticipating the act and its very specific rules, are actually going to do to change their processes, their development processes, their business processes. So there are a lot of unknowns there. And that leads to bigger questions about how we, as regulators, deal with the fact that there are always new technologies, new digital technologies. I was, for example, part of the High-Level Expert Group on AI from the European Commission, and we started to talk about these issues in 2018, with a report produced in 2019.

Mark Coeckelbergh: [00:30:39] That’s years ago, right? And the consequences of the act will only be fully visible in one or two years. So that’s a lot of time between, on the one hand, ethical principles and a more general framework and strategy, and, on the other, very concrete regulation and its impact. So we need to think, for example, about how to deal with this difference in speed between technological development and regulation, and whether we can find more flexible ways of responding to new technologies and their specific uses. I mean, even if we now know a lot about the technologies, we see that there are always new uses coming up. For example, in the medical context there will likely be new uses in the next years, but the regulation might not necessarily cater for that. So these are real challenges, I think, from the regulatory point of view.

Jody Ranck: [00:31:37] So say I’m a hospital CEO and I’m looking at all these AI vendors out there, and let’s say a year or two from now we have some kind of certifications for responsible AI and they all have the seal of approval and so forth. I have my in-house data scientists who will be procuring these AI tools from different vendors, and I’m the CEO and my head is spinning with all that: there’s the regulatory discourse, there’s the ethics and responsible AI discourse. If I call you up, Mark, and want some advice on how to wrap my head around this, what do I need to do across my company to reduce the odds that we do harm, and maybe try to implement and make part of our culture a kind of responsible AI ethos, so to speak, for lack of another term? Is this something you would advise on? Or maybe that’s too big a chunk of a question. How would you engage with that from where you sit?

Mark Coeckelbergh: [00:32:53] Yeah, I think there are various ways to help with that kind of worry, because I do think that these CEOs and the technical people working in that specific setting shouldn’t be left alone with those questions, right? So I think there are two ways to help there. One is internal: there are already ethics committees in hospitals and in all kinds of health care settings, and I think they need to be enriched with people who know about ethics, and also about the ethics of AI, from the outside. The other is that, from the regulatory perspective, it’s important to also offer tools, more concrete ways of doing things. And there I think companies like ours, like Dai.ki, can really help to translate from the general principles and rules to actual processes at the level of teams that develop AI, organizations that need to develop new processes. So there I think there’s an opportunity for Dai.ki and other initiatives to help people do the right thing, both from a compliance point of view and from an ethical point of view.

Jody Ranck: [00:34:13] Great. Well, thank you, gentlemen. This has been a great discussion, and before we close I just want to ask one final question. I’ve really enjoyed having about an hour with you today; it’s been very helpful. Within Chilmark we have our own internal book club, and we may be sharing our best reads going forward. So in this area of responsible AI, and to our listeners: definitely read Mark’s books, they’re fabulous, and it’s quite an honor to have him here today. But Mark, beyond your own books, what are, let’s say, two or three of the most important things you’ve read that can help people think about the really sticky issues we’re facing today with AI? If you were to recommend two or three to our listeners, what would they be?

Mark Coeckelbergh: [00:35:07] Yeah, for the more political side I would recommend books like Privacy Is Power, and Kate Crawford’s Atlas of AI, which takes a more sort of material and environmental view. I think those are important for the bigger picture. And then, when it comes to the more nitty-gritty work that connects to development and how to integrate ethics into development, I think there are still some books to be written and some work to be done. So obviously within Dai.ki we’ll also work on those topics.

Jody Ranck: [00:35:49] I’m working on a book on that, in fact, on responsible AI in health care. So we should talk; we’ll definitely be talking more, I hope. Perfect.

Mark Coeckelbergh: [00:35:57] And then we should look at that book, too.

Jona Boeddinghaus: [00:36:01] Yeah, I mean, besides your books, obviously, what I’m doing in my daily or weekly routine is really reading a lot of papers. Recently I’ve enjoyed the papers from Floridi, for example, Luciano Floridi. It’s more like an overview approach, but very helpful, because it’s exactly this topic of aligning higher-level principles to frameworks and to practice.

Jody Ranck: [00:36:31] The most recent one on LLMs, is that what you’re referring to?

Jona Boeddinghaus: [00:36:34] Yeah, that’s good. But also the older ones, where he’s really comparing the framework landscape with the principles and how to embed those into practice. These are highly recommended.

Jody Ranck: [00:36:46] Well, in our show notes we’ll include some links to these, and hopefully we’ll have you guys back again sometime to talk more and dive even deeper the next time we chat. But on that note, I think we’ll close, and I want to thank both of you for sharing your time today.

Mark Coeckelbergh: [00:37:04] Thank you, Jody.

Jona Boeddinghaus: [00:37:05] Thank you so much.

 
