Artificial Intelligence (AI) is booming, and so is its energy footprint. The exponentially increasing scale of Deep Learning (DL) models comes at the cost of high computational power and energy requirements, with the daily power consumed by highly acclaimed Large Language Models (LLMs) like ChatGPT already at 0.5 GWh. With current projections forecasting that computational power requirements will double every 5-6 months, rescue is being sought in innovative electronic AI hardware architectures, including neuromorphic and Analog in-Memory Computing (AiMC). Sculpting an AI hardware roadmap around electronics, however, implies inherent energy constraints due to the speed and power limits of the electronic interconnects inside the circuits. Meeting the compute requirements of next-generation AI applications without triggering an energy boom probably necessitates nothing less than a “tectonic shift” in the underlying computing hardware. This deadlock has driven the emergence of neuromorphic photonics, which was theoretically predicted to allow for orders-of-magnitude improvements in energy and size efficiency compared to electronic AI platforms, with the predictions relying on solid scientific principles and assumptions. Delving deeper into the performance metrics of recent neuromorphic photonic demonstrations, however, reveals a significant discrepancy between projected and achieved efficiencies: instead of the expected tens of fJ/MAC, computations with light still cannot be offered at energy levels below a few pJ/MAC. The main reasons are: i) the use of sub-optimal architectures and technologies, and ii) the use of rigid DL architectural and training models that are not optimally aligned with the idiosyncrasies of photonic computational settings.
This is where HAETAE steps in, aiming to overcome these shortcomings and deliver a versatile, energy-efficient photonic AI processor that can turn the promise of neuromorphic photonics into a tangible reality.