For users without much technical experience, a ready-made GUI like Jan.ai is probably a good start: it downloads models automatically and can run them via the ggml library on consumer-grade hardware such as Apple M-series Macs or inexpensive NVIDIA or AMD GPUs.
For slightly more technically proficient users, Ollama is probably a great choice for hosting your own OpenAI-like API for local models. I mostly run Gemma 2 or small Llama 3.1 models with it.
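As a minimal sketch of what "OpenAI-like API" means here: Ollama serves an OpenAI-compatible chat-completions endpoint on localhost port 11434 by default, so plain stdlib HTTP is enough to talk to it. The model name (`gemma2`) assumes you have already pulled that model with `ollama pull gemma2`; adjust to whatever you run locally.

```python
import json
import urllib.request

# Ollama's default OpenAI-compatible endpoint (assumes a local Ollama server).
OLLAMA_URL = "http://localhost:11434/v1/chat/completions"


def build_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat completion request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(model: str, prompt: str) -> str:
    """POST the request to the local Ollama server and return the reply text."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(model, prompt)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    # Response follows the OpenAI chat-completions shape.
    return body["choices"][0]["message"]["content"]


if __name__ == "__main__":
    print(ask("gemma2", "Say hello in one word."))
```

Because the endpoint mirrors the OpenAI API, the official `openai` Python client also works against it by pointing `base_url` at `http://localhost:11434/v1`.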
I was on holiday in the Cinque Terre in Italy with my wife a few years ago. On a rainy day we decided to take the train to Genoa and visit some museums. At the maritime museum I ran into an Italian coworker/coauthor from my research institute in Germany, who was visiting his family in his hometown with his wife.