
Irina Benedyk,
Assistant Professor
CSEE Department
SUNY at Buffalo

Courses Taught at UB
CIE439 Introduction to Transportation Systems Analysis
Introduces engineering and planning principles applicable to all types of transportation systems; concepts and methods of transportation network analysis; operation and management of transportation systems; traffic engineering elements; traffic flow theory; highway capacity analysis; design and application of traffic control devices.
Spring 2021, Spring 2022, Spring 2023, Spring 2024, Spring 2025
CIE500-BEN Advanced Analysis of Transportation Systems
This course identifies concepts fundamental to the planning, design, operation, and management of transportation systems. It aims to impart a systems perspective to transportation problems: a transportation system should be viewed as a unified whole and evaluated within the overall functional, social, economic, and political system of a given region, rather than only in terms of the local effects of a particular action. This course is concerned only with providing the basic framework onto which many different areas of specialization can be added through later coursework and practical experience. Topics include the framework for transportation systems analysis, analysis of transportation demand, analysis of transportation system performance and supply, traffic assignment and equilibrium, and brief introductions to related methodological areas.
Fall 2019, Fall 2020, Fall 2021, Spring 2023, Fall 2023
CIE500-BEN Applied Optimization for Intermodal Transportation
This course introduces concepts fundamental to the planning, design, operation, and management of intermodal transportation systems. Topics include the framework for intermodal transportation systems analysis, vessel and port scheduling, the facility location problem, public-private partnerships, and brief introductions to related methodological areas.
Spring 2020
- If you embrace AI in your teaching…
As a professor in an engineering department, I feel that we may have an easier time adapting to the rise of generative AI than our colleagues in other areas. We have faced similar paradigm shifts in previous generations, navigating disruptions that fundamentally changed how we learn and how we work. Calculators came first; they were initially feared as a crutch that would atrophy our mental math skills. Computers came second, solving quadratic equations and later complex differential equations, seemingly automating core parts of our curriculum. Despite these advancements, engineering education survived and even became more demanding, pushing us to solve more complex and abstract problems in class. That is because these tools also changed how we work. Generative AI will do the same, and we will survive it too!

With that confidence, I have chosen to fully embrace it in my teaching, reshaping my syllabus not as a defense against AI, but as a roadmap for partnering with it. Below are the changes (additions) I made to all of my course syllabi to accommodate AI tools in my teaching. I have implemented three changes.

First, I have added a new student learning objective.

Student Learning Objectives
Students will learn and practice ethical and efficient use of Artificial Intelligence (Generative AI).

Second, I have added a clause explaining students' use of AI tools.

Students' Use of Artificial Intelligence: In this course, you are encouraged to use generative AI tools (ChatGPT, Gemini, etc.) as partners in your learning process. Think of them as powerful assistants that can help you brainstorm, outline, find information, and refine your writing. However, they are tools, not replacements for your own critical thinking. You are the author and ultimately responsible for the submitted work. The goal is to use AI to elevate your work, not to generate it for you.

· Unless an assignment explicitly prohibits it, you may use AI tools for: (i) brainstorming and exploring topics, (ii) developing outlines for your papers and projects, (iii) checking your grammar and style, (iv) summarizing complex texts to aid your understanding, (v) explaining concepts in different ways.

· For any assignment where you use an AI tool, you must include a brief "AI Use Declaration" at the end of your submission. This declaration should be a short paragraph that transparently describes: which AI tool(s) you used (e.g., "I used Gemini..."), how you used it (e.g., "...to brainstorm initial ideas for my essay topic and to help create a first draft of my outline."), and a brief statement on how you reviewed and edited the output (e.g., "I then wrote the essay myself, fact-checked all information, and edited the entire document for originality and clarity."). This declaration is part of your assignment grade, and it aims to foster transparency and reflection on your learning process.

· If you include a direct, unmodified quote from an AI model (for example, to analyze its response), you must cite it clearly in-text and in your bibliography, just as you would any other source. Example citation: "Describe the symbolism of the donut's shape in the movie Everything Everywhere All at Once." Prompt to Gemini, Google, 6 July 2025.

· Presenting AI-generated text or ideas as your own original work without proper declaration or citation is considered plagiarism.
You are responsible for the accuracy, originality, and critical analysis of everything you submit. The final work should reflect your unique voice and understanding of the material.

· Be aware that individual assignments may have specific rules regarding AI use that supersede this general policy. Always read assignment instructions carefully. Failure to adhere to this policy will be considered a violation of the university's academic integrity code and may result in penalties, including failure of the assignment or the course.

Finally, I have disclosed my own use of AI to students using this clause.

Instructor's Use of Artificial Intelligence: In the spirit of transparency, and to model the ethical partnership with technology that this course encourages, I want to be open about my own use of artificial intelligence (AI) in preparing and administering this course. I may use AI tools as a productivity assistant to help with tasks such as:

· Generating initial ideas for lecture topics, discussion questions, and in-class activities.
· Creating first drafts of presentation slides, assignment prompts, and illustrative case studies.
· Checking documents for clarity, grammar, and style.
· Creating custom images or diagrams to visually explain complex concepts in my lectures.

It is critical for you to understand that AI serves as my assistant, not my replacement. Every piece of AI-generated content is thoroughly reviewed, fact-checked, edited, and approved by me to ensure it is accurate, relevant, represents my opinions, and meets the pedagogical goals of our course. The final judgment, core intellectual content, and, most importantly, all grading and feedback on your work are and will always be my own.

In addition, here is a Google document with many examples provided by different professors from diverse disciplines.
- Using LLMs for grading
In academia, using AI for grading and providing feedback seems like a reasonable next step. I suspect that AI will manage to grade and provide valid feedback for easy-to-medium-difficulty calculation problems. If it can solve it, it can grade it, right? But what if the problem is not a calculation assignment? We need to test it!

Note that the privacy of my students is a major concern for me (even with the toggle that instructs the LLM not to save the information; as long as something is online, I have low trust that the data is not collected). For the purpose of this experiment, I replaced the names of all students.

What activity have I used for the LLM to grade?
One of the activities my students do (juniors, enrollment 100-150, course 'Transportation Systems Analysis') is Weekly Group Activities (5 students per team), where students are asked to provide responses to 5 questions (some are debate questions, some are knowledge based). Each team provides a one-page report graded on a 0-5 scale. In addition, teams are required to hold their discussions online to enable participation grading. Here is the section from the syllabus/assignment regarding participation grading (a minimal scoring sketch appears near the end of this post):

*** Chat discussion, graded individually on a 0-3 scale, where
i. 0 is assigned to students who have not participated;
ii. 0.5 per question for a student who provided a minor contribution (helped edit the question, provided a short comment, and/or expressed agreement/disagreement with others' ideas);
iii. 1 per question for a student who provided a major contribution (participated in debates for that question, provided justification for their agreement/disagreement with others' ideas);
iv. Note: to get full contribution points a student needs to provide major contributions for at least 2 questions and minor contributions for two other questions. ***

What do we want to learn?
I am curious whether an LLM is capable of providing reasonable grading and feedback for the chat discussions. This year my enrollment was 103, so each week I had 21 reports and 21 chats. Every week, the assigned TA is tasked with looking through all chats to capture participation. It is time consuming and somewhat subjective, although the TA did a great job, with no complaints throughout the semester. In this experiment, I wanted to see: will an LLM be able to do this grading consistently between chats/groups; how does LLM grading compare with our grading; and how do different models perform?

My observations
I will give an example of one team discussion graded by different models (DeepSeek and the free Gemini model) later. Before that, let me share some observations from the experience:
· The models more or less demonstrated good performance on extreme cases: when students did not participate or barely participated.
· Different models often gave similar or identical grades, but their grades differed from the one given by the TA. This was so common across groups that I started checking whether the TA's grades were fair. My investigations supported the TA's decisions.
· DeepSeek demonstrated initiative and often graded student participation as 3.5-4.5 out of 3 with the comment 'Exceeding Expectations'. That was funny and surprising! The grading policy and my prompt instructions stated clearly that the grade should be between 0 and 3. However, because there are directions on how to grade the contribution to each question, if a student participated in more questions than requested, DeepSeek violated the 0-3 scale.
· Other models I tried followed the 0-3 constraint and stopped at 3 even if the student contributed to more questions. It was interesting to see how models prioritized different conditions when they contradicted each other.

Results of my investigation into the differences between the LLM and TA assessments: When students had little discussion on some question, just 1-2 messages but to the point and at the level of a major contribution, we gave them full credit; however, the models awarded a minor contribution (0.5) and, on rare occasions, even no contribution with 0 points. The opposite was also true: when a student had a lot of discussion with somewhat relevant text but no substance (discussing wording and formatting for the report, asking others for clarification, confirming that others had solved the question), we gave no contribution (0) or a minor contribution (0.5), but the models awarded more, qualifying that discussion as a major contribution. This means the models could not 'grasp' the meaning of the discussion; they just counted the number of relevant-looking statements. So, as of now, the TA is the only valid grader for this type of assignment. Models are very close to being able to do it, but not quite yet.

Example
Here is an example of the analysis from two different models. For this team we gave 3 out of 3 to every student; however, the models had big disagreements with us for this team, demonstrating some of my observations above (I provide the detailed report only for one student to keep this post readable).

Model 1
Student Analysis and Grading: Student P: A.1: Did not contribute. A.2: Provided major contribution by selecting and justifying vehicle attributes. A.3: Provided minor contribution by acknowledging AI's use in spellcheck and understanding. A.4: Did not contribute. A.5: Did not contribute. Summary: Paul contributed to the discussion, mainly focusing on Q2, and provided some general feedback and edits to the final document. Grade: 1.5 (1 for A.2, 0.5 for A.3)
Grade Summary: In summary, students Y, A, T, and E all received a 3, and P received a 1.5.

Model 2
Here's the summary and grading of each student's contributions based on the provided chat transcript and the evaluation metric. I've analyzed their participation in the discussion for each question (A.1 to A.5) and assigned scores accordingly.
Student P: A.1: No contribution. A.2: Provided a major contribution (1 point) by agreeing with A's suggestion for modifying vehicle attributes (tau) and proposing emissionClass and sigma as additional attributes. A.3: No contribution. A.4: No contribution. A.5: No contribution. Total Score: 1/3. Summary: P contributed moderately to A.2 but did not address A.1, A.3, A.4, or A.5. His participation was limited and needs improvement.
***
Final Grades: A: 4.5/3 (Exceeds expectations); Y: 3.5/3 (Exceeds expectations); T: 3/3 (Meets expectations); E: 2.5/3 (Meets expectations but could improve); P: 1/3 (Needs improvement)
You can guess which model was DeepSeek.

Concluding Notes
Note that Model 1 captured some contribution from Student P for A.3 but labeled it minor; Model 2 did not catch it at all. We judged both of these contributions as major. Both models also missed minor contributions from that student when they just clarified something in one or two sentences.

This experiment and report were created for a Lumen Circle activity during the Spring 2025 semester. By the time you read this, the experience will not be replicable due to the rapid advancement of LLMs. Hopefully, it will be much better! …and safer.
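As a footnote, here is a minimal sketch of the participation rubric quoted above, written in Python (the function name and the input format are my own, hypothetical choices). It also shows why a grade such as 4.5/3 is impossible under the rubric: the score is capped at 3.

```python
# Minimal sketch of the 0-3 participation rubric described above.
# Input: one contribution level per question, e.g. ["none", "major", "minor", "none", "major"].
# Assumptions (mine): "minor" = 0.5 points, "major" = 1 point, total capped at 3.

POINTS = {"none": 0.0, "minor": 0.5, "major": 1.0}

def participation_grade(contributions):
    """Return the chat-participation grade on the 0-3 scale."""
    raw = sum(POINTS[level] for level in contributions)
    return min(raw, 3.0)  # the syllabus caps the grade at 3, even for extra contributions

# Major contributions on 2 questions and minor on 2 others -> full credit (3.0).
print(participation_grade(["major", "major", "minor", "minor", "none"]))  # 3.0
# Contributing to every question cannot exceed the cap (no 4.5/3).
print(participation_grade(["major"] * 5))  # 3.0, not 5.0
```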
If you want to discuss more, feel free to reach out or to join a Lumen Circle. I highly recommend it!
- Use of LLM Arena for easy to medium level engineering problems
The swift and widespread integration of Large Language Models into virtually every professional sector has created an immediate and critical need for their inclusion within university curricula. We needed to incorporate them yesterday. Universities have an obligation to guide students in ethically navigating this new technology, teaching them to critically evaluate AI-generated content, understand its inherent biases, and leverage it as a powerful tool for augmenting human intellect rather than replacing it, and all of that, somehow, without sacrificing the major task: learning critical thinking and problem-solving skills in a specific area. When the most advanced LLMs are able to solve almost all engineering problems, how can we achieve that? Let's share ideas!

Idea: Use of LLM Arena for easy-to-medium-level engineering problems.
LLM Arena is a crowdsourced online platform where users anonymously vote for the better of two side-by-side large language model responses to the same prompt, generating a continuous leaderboard that ranks models based on human preference: https://lmarena.ai/

I teach transportation engineering to juniors, and my example comes from that course.

Problem from the textbook: A four-timing-stage traffic signal has critical lane group flow ratios of 0.225, 0.175, 0.200 and 0.150. If the lost time per timing stage is 5 seconds and a critical intersection v/c of 0.85 is desired, calculate the minimum cycle length and the timing stage effective green times such that the lane group v/c ratios are equalized. (A reference solution sketch appears at the end of this post.)

Revised problem for individual homework: Using LLM Arena (https://lmarena.ai/), solve this problem 2 times (you will get 4 different solutions) with random models. "A four-timing-stage traffic signal has critical lane group flow ratios of 0.225, 0.175, 0.200 and 0.150. If the lost time per timing stage is 5 seconds and a critical intersection v/c of 0.85 is desired, calculate the minimum cycle length and the timing stage effective green times such that the lane group v/c ratios are equalized." Note: you may need to copy this problem into a text editor before pasting it into the arena text field.

For every 'battle' provide:
· Screenshots of the obtained answers before and after your judgement (the first screenshot will show Assistant A/B instead of model names; the second will show the model names).
· A report of which models participated and what your judgement was.
· A justification for your judgement, with at least one reason.

For every solution, assume that you are a teacher grading students' work and provide:
· A solution screenshot with the name of the model.
· A grade on a 0-10 scale, where 0: no solution provided, and 10: perfect solution. Make sure that your grading is fair and consistent between 'students'.
· If your grade is less than 10, an explanation of all mistakes made by the 'student'.
· Feedback to the 'student'.

This is what the LLM Arena interface looks like as of July 2025:
[Figure: LLM Arena Interface (July 2025)]

Why an easy-to-medium-level problem? As of July 2025, most of these problems CAN be solved by advanced LLMs without any additional materials. However, difficult problems that require merging multiple concepts may not yet be solvable with a zero-shot approach.

Why solve it at least twice? Some 'junior' models may not be able to solve even easy problems; see the picture below.
[Figure: LLM results before user judgement (no names of the models). Right screen: no solution provided; left screen: correct solution provided.]
You can ask students to run another battle if this happens, or ask them to solve the problem 3-4 times to get different outputs that include no solutions, wrong solutions, and correct solutions. In the picture below, the right solution is wrong, while the left solution is correct.
[Figure: LLM Arena results after the user finished judgement (the left screen is highlighted in green, and the names of the models are visible). Right screen: wrong solution provided; left screen: correct solution provided.]

The problem considered here is an easy one that every student can solve correctly, usually on the first attempt. If more advanced models are used (Gemini, ChatGPT, etc.), students will most likely get the correct answer on the first attempt, limiting the incentive to look into the solution at all. However, when using LLM Arena, we got: one no-solution, one wrong solution, and two correct solutions. This allows for a more engaging activity for the student.

Why ask students to grade and provide feedback? The grading process, with its discussion of errors, forces students to look deeply into the solutions (however easy the problem is) multiple times, providing the necessary training. Writing feedback trains the ability to communicate about engineering problems. Another bonus: there is no need to generate different numbers to avoid 'cheating', as the LLMs provide enough randomness to guarantee that each student works with a somewhat different set of solutions. Sounds like win-win-win! Right?

Final thoughts: Is this feasible for large classes? The only issue is how to grade such homework for 150 students. We do need new tools in Brightspace. Before LLMs, this assignment was a self-graded calculation problem with randomly generated numbers and unified feedback with easy solution steps. However, if a problem is formulated in the way proposed here, grading becomes a time-consuming task that requires further discussion. Especially in light of the intense discussion about how teachers will be the first to be replaced by AI. What do you think?
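For instructors who want a quick answer key, here is a minimal reference sketch for the textbook problem above. It is my own script, assuming the standard minimum cycle length relation C = L·Xc / (Xc − ΣY) with equalized lane-group v/c ratios, where L is the total lost time per cycle, Xc the target critical v/c, and Y the critical flow ratios.

```python
# Reference sketch for the signal timing problem above (not the assignment's required format).
# Assumed standard relations:
#   C_min = L * Xc / (Xc - sum(Y))   -> minimum cycle length
#   g_i   = y_i * C / Xc             -> effective green that equalizes lane-group v/c

Y = [0.225, 0.175, 0.200, 0.150]   # critical lane group flow ratios
lost_per_stage = 5                 # seconds of lost time per timing stage
L = lost_per_stage * len(Y)        # total lost time per cycle = 20 s
Xc = 0.85                          # desired critical intersection v/c

C_min = L * Xc / (Xc - sum(Y))           # = 20 * 0.85 / 0.10 = 170 s
greens = [y * C_min / Xc for y in Y]     # 45, 35, 40, 30 s (sum = C_min - L = 150 s)

print(f"Minimum cycle length: {C_min:.0f} s")
for i, g in enumerate(greens, start=1):
    print(f"Stage {i} effective green: {g:.0f} s")
```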
- Re-engineering the Research Paper for the AI Era
The academic engineering paper, a cornerstone of scientific discourse, is in the midst of a profound, albeit quiet, identity crisis. Its primary author is increasingly an AI, and its most diligent reader is also an AI. Large Language Models (LLMs) now assist in drafting everything from literature reviews to methodology sections, while other algorithms parse, summarize, and categorize these papers for databases and discovery tools. In this new ecosystem, the traditional paper format, designed for human contemplation, has become an inefficient bottleneck. We are, in essence, forcing AIs to write verbose prose for other AIs to then painstakingly deconstruct. The time has come to redesign the research paper to be what it is now becoming: a machine-readable, data-first document.

[Image: Should you enjoy the writing? (Created with Gemini)]

With that, the current model seems extremely wasteful, especially when the environmental costs of training and using LLMs come to mind. It encourages "algorithmic bloat": long-winded introductions, meandering literature reviews, and detailed descriptions of standard procedures that serve neither human nor machine. A human researcher skims for the core contribution, while an AI reader must sift through semantic filler to extract the same key information. The existing model has also, if inadvertently, created a "citation race" culture, where publications resemble a self-referential game of citation counts rather than a genuine pursuit of knowledge. Researchers often cite their own previous work or tangentially related papers, inflating bibliography length without adding substantive context. This practice, combined with the publication of "minimal publishable units" often designed to train students or secure grants, clutters the academic record with articles that offer little novel insight. Furthermore, the traditional prose-heavy structure, coupled with a lack of accessible data and the inability to readily verify cited sources (often due to paywalls), makes it exceedingly difficult for human reviewers to detect logical fallacies or unsubstantiated claims.

Here and now, we have a choice: either continue following these inefficient procedures to report research progress, or rethink the process, for example by looking to the conciseness of medical research, which prioritizes clarity, efficiency, and, as a result, machine parsability.

A. The Introduction is the Contribution
Lengthy background and historical context should be eliminated unless they are foundational to the paper's novel claim. The paper should begin immediately with a clear, structured statement of its contributions. The "why" should be self-evident from the problem being solved.

S. Context via Structure, Not Prose
The sprawling literature review is obsolete. It should be replaced by a structured table of prior work and existing gaps. This table can list key preceding works, their methods, their limitations, and precisely how the current paper differs from or improves upon them. This is more direct for a human reader and immediately parsable for an AI. We all create such tables while doing the research, but when we start writing the paper, we revert to prose.

H. Methodology as a Pointer
Describing a well-established process in detail is redundant. If the methodology is not a core contribution, it should be reduced to method names and citations. For example: Methodology: We employed the standard Adam optimizer (Diederik P.
Kingma, Jimmy Lei Ba, 2014) with standard decay rates (β1 = 0.9, β2 = 0.999) and a learning rate of 1×10⁻⁴.

I. Data and Code as First-Class Citizens
To ensure true transparency and reproducibility, the paper must treat its data and code as core components, not as optional supplements. All datasets, models, and analysis scripts must be made available in persistent, publicly accessible repositories (e.g., Zenodo, Figshare, GitHub) and linked directly within the paper. This is a non-negotiable component. A claim without accessible data is an unsubstantiated assertion. This allows any reader, human or AI, to immediately access and interrogate the evidence, transforming the paper from a static report into a dynamic, verifiable research object.

F. Explicit, Parsable Contribution Sections
The paper's core innovations must be explicitly declared in a dedicated, machine-readable section using simple tags or a structured data block (e.g., JSON-LD); a sketch of what such a block might look like appears below, after the reviewer discussion. For example:
· We introduce a new benchmark dataset of 5,000 annotated synthetic images of brain scans.
· We propose a novel behavioral mechanism, "Cognitive Restructuring," that reduces driving anxiety by 36%.

T. Short, Data-Driven Results and Conclusions
The results section should be a direct presentation of data: figures, tables, and key metrics. The conclusion should be a bulleted list summarizing the findings and their implications, mirroring the claims made in the contribution section.

The Role of the Human Reviewer in the AI Era
This shift logically raises a critical question: how can such a condensed, hyper-specialized paper be reviewed? The answer is not to eliminate human oversight, but to empower it. Human review remains the most critical defense against fraud, ethical abuse, and the injection of pseudoscience into the scientific record.

[Image: Citation or not citation? (Created with Gemini)]

In this new model, the reviewer's role becomes that of an expert arbiter of integrity and logic, facilitated by AI. Upon receiving a paper, the reviewer gains access to a suite of tools. One tool, using the paper's structured citations, generates a traditional, narrative-style introduction and literature review on demand, providing the necessary context for reviewers who are not directly involved in the area explored by the authors. Crucially, with mandatory data and code linking, the reviewer can also deploy automated analytical and replication tools. These tools can run directly on the provided dataset to independently replicate key figures, perform standardized statistical checks, and flag anomalies or inconsistencies in the data. This empowers the reviewer to move beyond trusting the author's presentation to actively verifying the findings.

Freed from parsing prose and empowered with analytical tools, the human reviewer can focus on higher-order tasks:
· Verifying Integrity: Do the presented results logically follow from the methodology? Do the findings from the replication tools match the author's claims?
· Assessing Soundness: Is there any evidence of data manipulation, p-hacking, or other forms of academic dishonesty flagged by the analysis?
· Evaluating Significance: Does the contribution, now clearly isolated and verified, represent a meaningful advance over the prior art?
· Ethical Oversight: Are there any foreseeable ethical implications or potential misuses of the technology that need to be addressed?

This "human-in-the-loop" model preserves rigorous, critical oversight while adapting to the realities of an AI-driven research landscape.
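To make section F concrete, here is a minimal sketch of what a machine-readable contribution block might look like, generated with a short Python snippet and using the two example claims above. The field names and links are my own illustrative assumptions, loosely inspired by JSON-LD conventions rather than any established standard.

```python
import json

# Hypothetical machine-readable contribution block for the two example claims above.
# Field names are illustrative only; "contributions" is not a standard schema.org property.
contributions = {
    "@context": "https://schema.org",
    "@type": "ScholarlyArticle",
    "contributions": [
        {
            "type": "dataset",
            "claim": "New benchmark dataset of 5,000 annotated synthetic brain-scan images",
            "evidence": "Section 3, Table 2",
            "artifact": "https://doi.org/10.xxxx/placeholder",  # persistent repository link
        },
        {
            "type": "method",
            "claim": "Behavioral mechanism 'Cognitive Restructuring' reduces driving anxiety by 36%",
            "evidence": "Section 4, Figure 5",
            "artifact": "https://github.com/placeholder/analysis-code",  # linked analysis code
        },
    ],
}

print(json.dumps(contributions, indent=2, ensure_ascii=False))
```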
Importantly, the described approach is not something new. This is how research papers are structured and reviewed in some disciplines, and how they were envisioned to be. With the rapid proliferation of AI in our lives, we have a chance to fix the current bugs and level up the process altogether. Will we?

The first question we must ask ourselves is: Is the current, tradition-based system of writing corrupted to the degree that it is impeding new discovery? The second question is: Is AI ruining academic writing in its traditional form? Have you answered YES to both questions...?
- Empowering the Reviewer: Essential AI Tools for the New Academic Landscape
The peer review process is already a significant, often uncompensated, service to the scientific community, demanding deep expertise and considerable time, which makes performing it both quickly and accurately a formidable challenge. Surprisingly, the proliferation of AI in academic writing doesn't make reviewing easier, even as it rapidly erodes academic writing itself. Will academic writing be reborn? We will see! Let's assume it will.

A possible shift towards concise, data-driven research papers will necessitate a parallel evolution in the reviewer's toolkit. To effectively assess the integrity and validity of AI-influenced manuscripts, reviewers will increasingly rely on AI-powered tools themselves. Here are three crucial categories of such tools:

1. Contextualization and Literature Synthesis Assistants
Given the possible absence of lengthy introductions and literature reviews in the future paper format, reviewers will benefit immensely from AI tools that can rapidly generate context. These tools would take the paper's structured citation table and, on demand, synthesize a concise yet comprehensive overview of the relevant prior work and the research gap the paper aims to fill. Functionality would include:
· Analyzing the provided citations to identify key themes, influential papers, and the evolutionary trajectory of the research area.
· Highlighting how the cited works relate to each other and to the current paper's stated contributions.
· Explicitly outlining the novel aspects of the submitted work in relation to the existing body of knowledge.
This allows reviewers, even those not deeply embedded in the paper's specific niche, to quickly grasp the significance and novelty of the research without relying solely on the authors' potentially biased framing. It would also enable an additional service: efficiently checking whether every included citation is justified, something that is almost impossible to do at present.

2. Data Integrity and Reproducibility Verification Platforms
With the mandatory inclusion of data and code links, AI-powered platforms will be essential for reviewers to independently verify the presented findings. These tools would offer functionality such as:
· Running the provided scripts on the linked datasets to reproduce the key results (figures, tables, metrics) presented in the paper.
· Identifying potential issues in the data or analysis, such as unusual distributions, outliers, or signs of p-hacking.
· Analyzing the code for adherence to best practices and identifying potential flaws in the experimental design or implementation.
By automating these checks, reviewers can dedicate their expertise to interpreting the results and assessing their broader implications, rather than spending time on manual verification.

3. Contribution Validation and Claim Consistency Analyzers
With the emphasis on clearly defined contribution sections, AI tools can assist reviewers in evaluating the validity and consistency of these claims throughout the paper. Functionality would include:
· Identifying all explicit and implicit claims made in the contribution, results, and conclusion sections.
· Ensuring that each claim is directly and adequately supported by the presented data and analysis.
· Identifying any contradictions or logical fallacies within the paper's arguments and between its different sections.
These tools will help reviewers ensure that the concise paper delivers on its promises and that the stated contributions are genuinely substantiated by the research presented.
Embracing these AI-powered tools will be crucial for maintaining rigor and trust in the evolving landscape of academic publishing. The question is: What did I miss?
- Reflections on My Internship Experience by Gongda Yu
Choosing to undertake an internship during summer 2024 as a Ph.D. student was a decision that filled those three summer months with unexpected joy and growth. Stepping into a role that I had never experienced before, surrounded by a team of like-minded, passionate individuals, I found myself on a journey of self-discovery. This experience has not only helped me uncover what I truly love to do but also revealed my strengths in ways I hadn't anticipated. My manager and Ph.D. advisor provided invaluable guidance throughout this journey, shaping my path and encouraging me to explore my potential.

This internship opened my eyes to the gap between the academic knowledge gained in the classroom and the practical challenges faced in the real world. Working on actual projects, I had the chance to interact directly with clients, an experience that taught me the art of communication. I learned how to express our team's needs and capabilities with clarity and tact and to respond to clients' questions thoughtfully. This process pushed me to grow, not just as an engineer, but as a communicator and a professional who understands the importance of empathy and patience in every interaction.

One of the most profound experiences was visiting project sites. Standing there, witnessing DOT projects and construction in action, I could feel the tangible impact of our work. The concepts I once studied in textbooks were now a reality before my eyes. It was a deeply meaningful experience to see theory transformed into practice, and it reminded me why I chose this field in the first place.

As my internship drew to a close, writing my farewell emails to everyone I had worked with was unexpectedly emotional. Each goodbye felt like a moment of reflection at a crossroads in life, a realization that our paths might not cross again soon, if ever. It was a poignant moment, realizing that sometimes you have to silently wish people well and hope they have a good life. This feeling of closure, mixed with uncertainty about the future, was both humbling and profound.

Throughout the internship, I took the opportunity to schedule numerous coffee chats with colleagues from various departments. I encourage everyone to do the same, either during an internship or during your first month at any job. These conversations were incredibly enriching. Each person shared their stories, insights, and career advice, which became precious nuggets of wisdom for me. I came to understand that what I thought I was meant to do, based on my academic background, wasn't necessarily what would make me happiest or most fulfilled. The true lesson I learned is that our work should resonate with our passions, something we're eager to pour our time and energy into, not just a task we endure until the end of the day.

For me, life has always been about embracing new experiences and taking on different challenges. This belief has led me to try various jobs and live in different countries, and I feel this is where the true essence of life lies. This internship has reaffirmed that belief, showing me that life is about exploring, learning, and discovering what truly makes us come alive. In these three months, I have experienced so much more than I anticipated. I have grown professionally and personally, gaining a deeper understanding of who I am and what I want to pursue. This experience has taught me that the right path is not always the one that seems most obvious or logical, but the one that feels most true to who we are.
As I move forward, I carry these lessons with me, grateful for every moment of this incredible journey.

Best,
Gongda