
Courses Taught at UB

CIE439 Introduction to Transportation Systems Analysis

Introduces engineering and planning principles applicable to all types of transportation systems; concepts and methods of transportation network analysis; operation and management of transportation systems; traffic engineering elements; traffic flow theory; highway capacity analysis; design and application of traffic control devices.

Spring 2021, Spring 2022, Spring 2023, Spring 2024, Spring 2025

CIE500-BEN Advanced Analysis of Transportation Systems

This course identifies concepts fundamental to the planning, design, operation, and management of transportation systems. It aims to impart a systems perspective to transportation problems: that a transportation system should be viewed as a unified whole to be evaluated within the overall functional, social, economic, and political system of a given region rather than just the local effects of a particular action. This course is concerned only with providing the basic framework onto which many different areas of specialization can be added through later coursework and practical experience. Topics include the framework for transportation systems analysis, analysis of transportation demand, analysis of transportation system performance and supply, traffic assignment and equilibrium, and brief introductions to related methodological areas.

Fall 2019, Fall 2020, Fall 2021, Spring 2023, Fall 2023

CIE500-BEN Applied Optimization for Intermodal Transportation

This course introduces concepts fundamental to the planning, design, operation, and management of intermodal transportation systems. Topics include the framework for intermodal transportation systems analysis, vessel and port scheduling, facility location problem, public-private partnership, and brief introductions to related methodological areas.

Spring 2020


  • Instead of a Lecture on Self-Driving Vehicle Impacts, Do This!

    If you are planning to give the lecture on the societal impacts of self-driving vehicles, resist the urge to just talk at your students. I understand that you feel the responsibility to share everything you know. I was there too. For many years I gave a lecture for different audiences, constantly updating materials and improving my delivery. Only this year did I decide to try something new, and I have not regretted it! The future of transportation is complex, controversial, and best understood through simulation. I have developed a 1.5-2-hour structured activity that turns the classroom into a city planning task force. Instead of passively listening, students work in small teams (2-3 people) to advise a City Council on a future where 50% of vehicles are fully autonomous by 2045. Here is how to set it up, the tools you need, and how to adapt it for your specific students.

The Setup

Time: 90-120 minutes.

Tech: Use Mural.com, Miro.com, or Lucid.co for virtual collaboration. If you are in person, large sticker boards or chart paper work perfectly. You will also need a projector for the initial presentation.

Grouping: Divide students into "consulting firms" of 2 or 3 members. Small groups ensure everyone contributes.

The Twist: 5 Distinct Community Scenarios

To make this rigorous, don't let every group analyze the same generic city. Assign each team one of these five distinct community profiles. This forces them to consider context: what works for a suburb might destroy a walkable downtown. Here are some examples to use:

The Sovereign Tribal Nation: A rural community with sovereign governance. Challenge: Balancing modern connectivity with cultural preservation and land rights. How do AVs impact traditional land use?

The High-Income Suburban Sparse Community: Large lots, gated neighborhoods, and high car dependency. Challenge: Addressing privacy concerns and the potential for increased urban sprawl and commute distances.
The Walkable Mixed-Residential Community: A dense "15-minute city" where residents walk or bike to shops. Challenge: Ensuring AVs don't degrade the pedestrian experience or safety. Can AVs replace parking lots to create green space?

The College Town: A place where thousands of students walk, bike, and rush to class, often late at night. Challenge: If students can't park cars on campus anymore, how do commuters get to class? And how do self-driving cars handle thousands of students jaywalking while looking at their phones?

The Historic Tourist Center: Narrow, cobblestone streets with high foot traffic. Challenge: Navigating modern technology in preserved infrastructure without ruining the aesthetic or tourist economy.

The Activity Flow (90-120 Minutes)

Part 1: The Foundation (30 Minutes)

(10 minutes) Start by defining the levels of autonomy (Level 0 to Level 5). You can shorten this by introducing only fully autonomous cars (Level 5). The best way to highlight the capabilities is to showcase currently available demonstrations (search YouTube for Waymo, Zoox, and Nuro).

(5 minutes) Task: Ask teams to create a T-chart. On one side, they list the "Pro-Self-Driving" arguments; on the other, the "Anti-Self-Driving" arguments. Tech Tip: Encourage them to use LLMs to brainstorm unexpected arguments. If you have only a few teams (2-6), you can use one T-chart and ask everyone to contribute to it. If more students are participating (7 teams or more), consider making one T-chart for every 5-6 teams, depending on the room setup.

Title for the T-chart: "Self-driving vehicles will impact society" (main title) "...positively by" (left subtitle) "...negatively by" (right subtitle). Add one positive and one negative example to the central T-chart (if you have many) if you think the audience will need some help. For example:

...by allowing me to go where I want without the need to learn to drive.

...by removing jobs that deal with driving education.
T-chart created using Mural.com

(15 minutes) Finally, discuss some (or all) of the provided arguments. Some will repeat each other. It is a good idea to walk between teams to see their progress and note what arguments are produced.

Part 2: Concept Mapping (10-20 Minutes)

This is where they see the bigger picture. Ask students to draw a concept map connecting AVs to "external" systems like Urban Planning, Public Transportation, The Economy, and Social Equity.

(10-20 minutes) Task: Each team works with a randomly assigned community to create a concept map.

Prompt for a general audience: It is 2045. You are advising the City Council. Explore your community using the provided infographics and identify how relationships to external systems will change when 50% of cars are Level 5 / self-driving. Start your sentences as "We believe AVs will decrease public transit funding because..." Systems to consider: Urban Planning, Public Transportation, The Local Economy, Social Equity.

Prompt for a younger audience: It is 2045. You are youth consultants advising the City Council on what the next generation needs. Explore your community using the provided infographics and identify how relationships to external systems will change when 50% of cars are Level 5 / self-driving. Start your sentences as "We believe AVs will decrease public transit funding because..." Systems to consider:

The "First Job" Market (The Economy)

The "Friday Night" & Social Life (Social Equity/Culture)

School Commute & Safety (Public Transportation)

Parking Lots vs. Parks (Urban Planning)

The "Privacy" (Law & Ethics)

The Cost of a Ride (Social Equity)

Tip: Walk between teams and ask students to imagine themselves as part of this community; try to motivate them to create a vivid description of life in such a community.

Example of a concept map created by students using Mural.com

The concept map should likewise be created on the sticker boards or on whichever online board you are using.
Depending on the available time, ask teams to add their responses for all external systems or only a limited number of them (1-3); this is another way the length of this section can vary.

Preparation (optional): Put a (printed) picture with the community infographic in the middle of the board and draw arrows labeled with the external systems' names.

Part 3: The Scenario & Policy Pitch (25-35 Minutes)

This is the synthesis phase. Students move from analyzing relationships to creative problem-solving. They must prioritize the issues they found and propose a concrete solution.

(15 minutes) Task: Based on their concept map and community profile, teams must identify the top one or two opportunities and the top one or two risks. Then, they must develop one specific policy recommendation to address one or more of them. Finally, they present a "Brief."

Prompt for a general audience: "You are the special task force advising the City Council on the 2045 transition. Based on your analysis, identify the top one or two opportunities and the top one or two risks for this specific community. After that, draft a Policy Brief (1 slide). Your policy must explicitly maximize one of the opportunities or minimize one of the risks you identified. You will have 1 (or 3) minutes to present your brief to the Council."

Prompt for a younger audience: "As Youth Consultants, you need to give the City Council a clear set of rules. Look at your map: What is the best thing that could happen (opportunity) and the scariest thing that could happen (risk) when self-driving vehicles dominate the car flow? Choose ONE of those. Create a 'Future Law' proposal to make sure the good thing happens, or the scary thing doesn't. Create a campaign poster (or slide) explaining your law. Example: If the risk is 'No Summer Jobs,' your Law might be: 'Robots cannot deliver pizza; only humans can.'"

Tips:

For the Pitch: Encourage students to use NapkinAI for one-slide ideas. Have an example prepared to show them how it helped to generate a visual.
For the Policy: Push students to be specific. If they say "Make it safe," ask "How? Speed limits? Dedicated lanes? Sensors on street corners?"

Time Management: Be strict with the 1 (or 3) minute presentation limit to simulate a real council hearing environment.

(10-20 minutes) Presentation Task: One representative from each team gives a 1 (or 3) minute presentation on the proposed policy. Consider asking the audience to vote on whether or not they are convinced to implement it. Choose the presentation time based on the number of teams. Based on my experience, 1 minute is more than enough.

Example of a policy slide created by students using NapkinAI

Preparation (optional): Provide a "Policy Brief Template" on the board with three boxes:

The Challenge: (Which risk/opportunity did you pick?)

The Solution: (What is your policy/law?)

The Proof: (Why will this work? Reference your concept map.)

If using NapkinAI, prepare a challenge/solution/proof chain and create the visuals in front of the students; briefly walk them through the provided ideas to show the thinking behind the visualization selection. If using Mural/Miro, add a voting area so the class can vote for the "Consulting Firm" they agree with.

Adapt It for Different Audiences

For High School Students:

Focus: Personal impact and safety.

Scaffolding: Give them a pre-filled list of stakeholders who live in the assigned areas (e.g., "The Soccer Mom," "The Truck Driver," "The Mayor") to help them roleplay.

For College/University Students:

Focus: Systems thinking, equity, and economic modeling.

Challenge: Require them to back their policy with a specific funding mechanism (e.g., "How will the city pay for the infrastructure upgrades required for AVs?").

Tech: Encourage the use of tools like NapkinAI to generate visual aids for their final briefs.

Let me know how it goes, and good luck!

******

Examples of community infographics generated with Gemini are presented below.
You can use this prompt as a starting point to create your own community description: ***A vertical infographic flyer in a friendly, colorful cartoon style. Top section: A vibrant scene of a "Sovereign Tribal Nation" community with an open-air market having colorful stalls, set against wide open landscapes and mountains. Show a mix of traditional indigenous cultural buildings and small modern houses. Show children playing outside and people walking on dirt trails. Show a road with a modern car and a person on a horse side-by-side. Bottom section: A clean layout for text statistics with icons. Icons should represent "No Signal/WiFi," "Gravel Roads," "School," "Church," and "Shuttle Bus." The overall vibe is sunny, community-focused, and rural. Text Content for the Flyer THE SOVEREIGN TRIBAL NATION Where Tradition Meets the Horizon Community Profile Population: 8,000 (Spread over 400 sq. miles) Median Income: $38,000 Connectivity:  Low. 40% of the territory lacks high-speed 5G/Internet. Governance: Sovereign. (State laws do not automatically apply). Mobility & Life Getting Around: High dependence on Cars & Horses (Vast distances to connect to the outside world). Roads: 30% are Unpaved/Gravel (Challenging for standard robots). Public Transit: Limited Shuttle (Runs only twice a day). The Vibe: Highly walkable village core. Children play outside safely. extensive network of walking trails. Key Landmarks 1 School | 1 Post Office | Local Library | 1 Church | Tourist Center*** *** A vertical infographic flyer in a friendly, colorful cartoon style. Top section: A bustling university campus scene mixing historic brick clock towers with sleek modern glass dorms. Show crowds of students wearing backpacks crossing the street; many are looking down at their smartphones (distracted). Show electric scooters scattered everywhere on the sidewalks ("scooter clutter"). Bottom section: A clean layout for text statistics with icons. 
Icons should represent "Graduation Cap," "Young Adult (18-24)," "No Parking Sign," "Electric Scooter," and "Smartphone with Warning/Exclamation Mark." The overall vibe is chaotic, youthful, and busy. Text Content for the Flyer THE COLLEGE TOWN The University Zone Community Profile Population: 30,000 (Seasonal/Fluctuating). Demographics: 70% are aged 18–24 (Gen Z / Alpha). Economy: Dominated by University employment and service jobs (late-night food/retail). Mobility & Life Parking Crisis: Zero Availability. Students park illegally or miles away. Micro-Mobility: High Usage of e-scooters and skateboards (often abandoned on sidewalks). Safety Issue: "Distracted Walking" (Students crossing streets while looking at phones) is the number one cause of accidents. *** *** A vertical infographic flyer in a friendly, colorful cartoon style. Top section: A lively urban street scene with 4-story brick apartment buildings. The ground floors have colorful cafes and flower shops with awnings. Show a dedicated green bike lane filled with cyclists and scooters. Show parents with strollers and young professionals walking on wide sidewalks. There are very few cars, but show a subway/metro entrance. Bottom section: A clean layout for text statistics with icons. Icons should represent "Apartment Building," "Bicycle," "Metro Train," "Parking Garage with a Red X," and "Caution/Collision." The overall vibe is energetic, dense, and hip. Text Content for the Flyer THE WALKABLE MIXED-USE HUB The "15-Minute City" Lifestyle Community Profile Population: 25,000 (High Density). Demographics: Young professionals and young families. Space Problem:  25% of all land is currently wasted on street parking and concrete garages. Mobility & Life Car Ownership:  Low. (40% of residents do not own a car). Primary Transport: Walking, Biking, and Metro. Safety:  High Risk. Frequent pedestrian accidents caused by human drivers failing to yield. 
*** *** A vertical infographic flyer in a friendly, colorful cartoon style. Top section: A charming "Old World" historic district. Show very narrow cobblestone streets lined with old brick souvenir shops and historic lanterns. Show a horse-drawn carriage sharing the tight road with a large, stuck tour bus. Crowds of tourists with cameras are walking everywhere in the street (no sidewalks). Bottom section: A clean layout for text statistics with icons. Icons should represent "Camera/Tourist," "Horse head," "Traffic Jam," "No Cell Tower symbol," and "Old Scroll/History." The overall vibe is beautiful, historic, but very crowded and tight. Text Content for the Flyer THE HISTORIC TOURIST CENTER Preserving the Past, Surviving the Crowds Community Profile Population: 5,000 Residents ...but 20,000 Visitors every single day. Economy: 85% Tourism-based. (If the tourists stop coming, the town dies). Infrastructure: Roads built in the 18th century (Too narrow for modern buses). Mobility & Life Traffic: Severe Congestion caused by massive tour buses and delivery trucks trying to squeeze through tiny streets. Pedestrian Zone: Heavy foot traffic mixed with vehicles (dangerous and chaotic). Aesthetic Code: Strict Laws against "ugly" modern technology (No visible cell towers, neon signs, or modern boxes). *** *** A vertical infographic flyer in a friendly, colorful cartoon style. Top section: A bird's-eye view of a wealthy suburban neighborhood. Show winding cul-de-sacs with large, identical mansions having 2-car garages and perfect green lawns. Show a fancy "Gated Community" entrance with a security booth. There are no sidewalks, only smooth black asphalt roads with luxury SUVs driving on them. Bottom section: A clean layout for text statistics with icons. Icons should represent "High Income/Money Bag," "Stopwatch (Long Commute)," "3 Cars in a row," "No Bus sign," and "Privacy Fence." The overall vibe is exclusive, quiet, and very spacious.  
Text Content for the Flyer THE HIGH-INCOME SUBURBAN SPARSE COMMUNITY Privacy, Comfort, and the Long Commute Community Profile Population: 15,000 Median Income: $140,000 (High Wealth). Housing: 100% Single Family Homes (Low Density). Commute: 45+ Minutes to the city center (High "Windshield Time"). Mobility & Life Vehicles: 2.5 per Household (Teens and parents all have their own cars). Public Transit: 0% Usage (No bus stops exist here; residents view buses as "unsafe" or "unnecessary"). Walkability Score: 10/100 (No sidewalks. Roads are designed for vehicles only).***

  • If You Are Doing a Long (1.5-3 Hour) Lecture or Activity, Don't Forget to Do This!

    Making Your Listeners Active Subjects Using Pre- and Post-Surveys

Long lectures always face a universal challenge: maintaining engagement. While breakout sessions and Q&As help, today I want to discuss another powerful, underutilized tool that turns the audience into the subject of their own experiment: the Pre- and Post-Lecture Survey. By asking a specific set of identical questions before and after the session, you transform a standard presentation into a real-time Human Factors experiment. This demonstrates to your listeners how information changes perception, biases, and confidence.

Case Study: The Autonomous Vehicle Lecture

To illustrate how this works, let's look at a lecture on self-driving cars. The goal is to track how technical knowledge and ethical framing impact public trust. Here is the 4-question framework used to measure the change in opinion after your "intervention" in the room.

1. The Control Variable (optional; for some topics it may not be needed)

Q1: Are you an active driver?

The Logic: This is your baseline. It should not change during the lecture. In addition, it allows you to segment the data later: do active drivers trust automation less than non-drivers?

2. The Psychological Variable

Q2: How important is it for you to be the driver when you are riding in a car with other capable drivers?

The Logic: This measures the "Locus of Control." While this is a personality trait, it can shift slightly based on the lecture content. It is a critical predictor for Question 4: people who crave control usually reject automation regardless of safety statistics.

3. The Dunning-Kruger Check (optional)

Q3: How familiar are you with self-driving cars?

The Logic: Most educators want this number to go up. However, a fascinating outcome is when self-reported familiarity decreases after the lecture.
The Insight: When students grasp the immense complexity of LiDAR, sensor fusion, machine vision, edge cases, implications for insurance, and so on, they often realize how little they actually knew. This "Socratic Ignorance" can be a powerful motivator for students to enter the field you are presenting and solve these complex problems.

4. The Key Metric (if you want to ask only one question, it should be this one)

Q4: If it were affordable and available, I would use a self-driving vehicle. (Rate your agreement with the statement.)

The Logic: This is your dependent variable. Does learning how the technology works increase desire (demystification) or decrease it (awareness of risk)?

The "Intervention" Effect: Framing Matters

This method allows you to demonstrate how specific modules of your lecture influence opinions. You may want to reinforce this with an impactful activity or discussion. For example, in the self-driving lecture or activity, you might introduce the "Moral Machine" project, a discussion of how AI handles ethical dilemmas (like the Trolley Problem).

The Result: I often find that after discussing the Moral Machine (even when I emphasize that it is a social experiment, not an actual algorithm for how self-driving vehicles make decisions), trust in self-driving vehicles (Q4) drops sharply.

The Lesson: This teaches students about Framing Effects. I try to keep 3-4 minutes at the end to discuss this effect after demonstrating the drop in their own opinions. It also helps me show that technical capability isn't the only factor in new technology adoption; ethical ambiguity scares people away too.

Why You Should Use It

Implementing this tool achieves four distinct pedagogical goals:

It Demonstrates the Power of "Framing"

By introducing a variable like the "Moral Machine" or "Hallucination Risk" midway through, you can show how the context of information changes the reception of technology.
When students see their own trust scores drop after an ethical discussion, they learn a critical lesson: engineering success isn't just about code; it's about how that code is presented and perceived by society.

It Primes the Brain for Learning

Asking questions before the content is delivered triggers a cognitive process known as "Priming." By forcing students to commit to an answer early, their brains subconsciously "hunt" for evidence during the lecture to validate or correct their initial stance. They stop being passive listeners and become active investigators.

It Personalizes Abstract Theory

Grounding the presented material in the audience's personal lives is an evidence-based best practice in education. By asking students to answer questions about their own safety, choices, and wallets (e.g., "Would you ride in this car?"), you instantly bridge the gap between abstract theory and personal relevance, making the audience care about the material.

It Sparks Interest in Human Factors

By analyzing why Q2 (need for control) correlates with Q4 (refusal to use), and by reflecting on the change in answers before and after, students learn that engineering is also about psychology, and that Human Factors Engineering can be a path for those who do not like math or physics.

How to Do It

I usually use Google Forms, but you can choose your preferred platform. With Google Forms I can put the "before" and "after" reports side by side without processing the answers further (which may not be possible elsewhere). Don't forget to allow access to everyone with the link, without the need to sign in! Convert the link into a QR code and put it on the title slide; the audience can scan it while getting seated. Budget 3 minutes for the "before" survey and at least 10 minutes for the "after" survey, so you have time to go over the results and provide some insights.

I hope I have convinced you to use this tool. Here are sets of questions for some lectures and activities you may lead.
Topic 1: AI and Trust in Generative Models (LLMs)

Context: A lecture explaining how Large Language Models work (probability, tokens, training data) versus the perception of "thinking" and "hallucination"; you may also discuss the LLM Arena.

Q1 (Control): How frequently do you use AI tools (like ChatGPT, Gemini, or Claude) in your daily work or studies?

Logic: Baseline usage. It should not change after the lecture. Heavy users may have "automation bias" compared to novices.

Q2 (Psychological Variable): I enjoy digging into new problems and information for hours. (Rate your agreement with the sentence.)

Logic: Measures the trade-off between speed and accuracy, and personal preferences when dealing with a new topic. This significantly impacts whether someone accepts AI's occasional "hallucinations."

Q3 (Knowledge / Dunning-Kruger): How confident are you that you understand how an AI generates a sentence?

Logic: Before the lecture, people think they know (it "thinks"). After explaining probabilistic token prediction, confidence often drops as they realize it is just "math, not magic."

Q4 (Key Metric): I would trust an AI system to review my medical records. (Rate your agreement with the sentence.)

Logic: The intervention goal. After learning about "hallucinations" and training data bias, this number usually plummets, showing a healthier, more skeptical relationship with the tool.

Topic 2: Sustainable Transportation & Mode Choice

Context: A lecture on urban planning, the true cost of car ownership and "free parking," and the efficiency of mass transit.

Q1 (Control): What is the approximate distance of your daily commute (one way)? And what is your most common mode choice? (You can choose just one question.)

Logic: A fixed variable. Distance is a hard constraint that influences choices regardless of preference, and the most common mode so far should not change based on lecture content.
Q2 (Psychological Variable): How much do you view your personal vehicle as a symbol of your social status or personal freedom? (For a younger audience, alternatives can ask about their level of activity during the day or the environmental impact they leave.)

Logic: This captures the emotional attachment to cars. High attachment here usually correlates with resistance to public transit, regardless of efficiency arguments.

Q3 (Knowledge / Dunning-Kruger): How familiar are you with the "Total Cost of Ownership" (insurance, depreciation, fuel, parking) of a personal car vs. public transit? (This question is heavily influenced by the content of your lecture.)

Logic: Most people underestimate car costs. After the lecture breaks down the math, this self-assessment shifts.

Q4 (Key Metric): If public transit took 15 minutes longer than driving but saved me $300/month, I would switch to transit. (Rate your agreement with this statement.)

Logic: The "Willingness to Pay" (or willingness to wait) metric. Does the financial data presented in the lecture outweigh the convenience/status factor?

  • Evolving Role of Mentorship

    Are you an educator? What are your thoughts on Large Language Models, and AI technology in general, with respect to education? Do you think that soon (or very soon) you will be replaced? Have you tried NotebookLM and the guided learning modes from Google or OpenAI? If not, try them and answer again. What do you think now?

From Books to AI

New technologies have reshaped human knowledge transfer before, slowly removing the need for constant, direct human-to-human connection during education. From the beginning, learning required mentorship, in the form of a direct, personal transmission of skill and wisdom from an elder to an apprentice. This fundamental model, though transformed by successive waves of innovation, has never lost its core importance. However, it may be that in an AI-driven educational landscape, the role of the mentor, the teacher, will be diminished. Or so they say... Well, hear me out! What if it will not be rendered obsolete, but will instead become more critical and complex than ever before?

The first major shift in education came with formalized institutions like church schools, which introduced the "teacher" as a figure responsible for instructing groups. Johannes Gutenberg's invention of the movable-type printing press (around 1440) democratized knowledge further, bringing books to a wider audience. Yet mentors remained vital. Why? Because a book can provide information, but only a mentor can ignite curiosity, tailor understanding, and guide the application of knowledge. As societies evolved, so did education. The rise of private schools for upper-class families of all genders underscored the value of a good teacher as a social and intellectual catalyst, one who could give their students a distinct advantage.
With the advent of universal public education, the teacher's role expanded dramatically again, now tasked with navigating the diverse needs and backgrounds of almost the entire populace.

(Not an ad, just unsolicited advice: read Michael Sandel's book or watch his lectures. Scratch that, do both, and visit Dr. Sandel's seminar whenever you can!)

In the modern era, we have come to understand that nearly anyone can learn (that was not obvious at first!), provided their individual needs are met. This places the mentor at the center of the learning process again. Mentors are now required to understand and apply the proper tools to accommodate any learning style. The once-underappreciated roles of kindergarten and elementary school teachers are now rightfully seen as foundational, shaping not just knowledge but a child's entire disposition towards learning. The genius engineers building these new tools most likely got their propensity to learn and innovate from their first mentors and teachers.

At the highest levels, in universities now accessible to more people than ever, the lecture of a great professor remains a coveted experience (even on YouTube! Have you watched "Justice" by Michael Sandel? Thrilling, right? Can't stop watching!) because it is more than a recitation of facts. It is a masterfully curated and condensed synthesis of knowledge, delivered with a passion that a textbook cannot replicate.

Then the Internet threatened to change everything. With information instantly available, many questioned the need for teachers at all. If a lecture is recorded, are all professors redundant? Probably a few should stay to record more lectures, and the others can go, right? The reality proved more complex. The role of the professor shifted from a "sage on the stage" to a "guide on the side," helping students navigate the ocean of information and differentiate signal from noise.
Online Education Tools

Online learning platforms like Coursera and edX were heralded as the next revolution, yet their impact has been nuanced. Research consistently shows that while millions enroll, the average completion rates for these Massive Open Online Courses (MOOCs) often hover in the single digits, highlighting a critical missing element: the accountability, motivation, and personalized guidance that a live mentor provides. Yeah, it puzzles me too! But it seems to be true.

Now we face AI in the form of Large Language Models (LLMs). AI enthusiasts proclaim that teachers will be among the first professions to be replaced. And the best part: more individuals will be able to get an education in a more efficient way (thanks to AI personalization). But so far, the opposite feels true. The existence of tools that can flawlessly "cheat" a traditional assignment has not eliminated the need for educators; it has made their job infinitely more challenging and important. To truly educate a student now means guiding them through technical knowledge while instilling professional skills like communication, leadership, and teamwork, and simultaneously teaching them to use AI efficiently and ethically. It requires ensuring that genuine learning is happening beneath the surface of AI-assisted output.

Is it a surprise that this new reality demands a significant increase in a teacher's workload? Let me show you! Gone are the days of easily graded multiple-choice questions or long essay assignments, which can now be generated in seconds and, hence, are no longer helpful for learning. Properly designed and meaningful assessment requires more feedback, more review, and more innovative assignment design (assignments that provide learning with or without the use of LLMs). The idea of using AI to grade this new wave of assignments is a paradox, in my view: if an AI can grade it, an AI can do it, creating an inescapable loop of potential cheating.
The path forward requires a fundamental redesign of our pedagogical approach. We have a lot of work ahead of us, not less…

Before you leave, answer these two questions:

Question 1: Who is more likely (a human educator or an LLM chatbot) to admit that they don't know the answer or are 'not sure'?

As an educator, I knew what I would do, but I wasn't sure about the perspective of an LLM chatbot. I decided to ask the most advanced models available at the moment. Here are the answers from Gemini 2.5 Flash and from ChatGPT (without a subscription; the model used was ChatGPT 5). So, here is your answer! Humans can be embarrassed by not knowing something and may try to hide it when maintaining authority is important, while LLMs will be honest. Always.

Question 2: When you are making a mistake, who will help you by providing the correct answer with a supporting explanation, and who will help you discover the correct answer yourself?

Guided Learning Option at Gemini

I used Gemini's Guided Learning option to solve multiple different problems from my class, Transportation Systems Analysis (one you can find in the tools). If you want supporting guidance while practicing your homework, you need to select that option. While Gemini was very helpful, it felt more like following a worked example in a textbook, or a professor solving a problem in class, than like guided learning: comments after the first step (done without mistakes), a response after the second solution step, where I made a mistake, and comments after the final steps, again with a mistake in my calculations. This looks great! But it is not the way I would guide a student. There is no room for critical thinking on the student's part, no discovery.

There is no way to stop students from using these tools to solve homework assignments, and we should not! When they join the workforce, these tools will be available to them and, more likely than not, their employers will require their use.
So, we should teach students how to use them, efficiently and ethically. But at the same time, we still need to teach students the core principles of their specific discipline, and that requires us to develop new and innovative teaching approaches. So, educators and researchers in education, it is time to roll up our sleeves. We indeed have a lot of work to do!

  • Needed Human-Centric Transportation Research In Light of Technological Advances and Industry Messaging

The current technological landscape, marked by advances in AI (especially generative and multimodal models), the final arrival of sophisticated wearable devices (remember Google Glass?), and an industry shift towards human-centric goals, presents a unique opportunity to redefine transportation. Traditionally, transportation research has prioritized efficiency, but this new era allows for a focus on human well-being, inclusivity, safety, and personal empowerment.

MLLM illustrated by Gemini

At the heart of this transformation are Multimodal Large Language Models (MLLMs), which can process and reason across text, audio, and visual inputs in real time. Concurrently, wearable technology has evolved into proactive health and wellness hubs with advanced biometric sensors that can infer a user's physiological and affective state. Leading AI companies like OpenAI, Meta, and Google are framing their work around human benefit and, hopefully, responsible innovation.

Wearable Technology in 2025 and Beyond

Wearables are transforming from data trackers into intelligent companions, capable of creating "AI-Generated Health Twins" and acting as "Conversational and Actionable Micro-Coaches." This is driven by advanced chipsets and new biometric sensors that monitor blood oxygen, skin temperature, blood pressure, heart rate, heart rate variability (HRV), and, critically, physiological proxies for mental and emotional states such as electrodermal activity (EDA).

Meta Ray-Ban AI Glasses

Smart glasses enable discreet data collection and offer augmented reality interfaces.
The convergence of physiological sensing (widely adopted through devices such as Fitbit, Amazfit, etc.), location data (collected by smartphones for years), and health- and habit-related data (collected by apps such as Apple Health, Fitbod, Google Fit, Fooducate, and many others) creates an opportunity to build an "individual psycho-geographic" layer that maps the emotional and behavioral impact of physical spaces, allowing for the development of personalized routing around "stress hotspots."

Industry Guiding Principles for an AI-Driven Future

Leading AI companies have articulated missions emphasizing responsibility, safety, and human benefit. OpenAI focuses on AGI benefiting all humanity, Meta on human connection and personal AI, and Google on responsible innovation and universal accessibility. Let's take them at their word and discuss opportunities for transportation research!

The Context-Aware Agent for Personalized and Inclusive Navigation

Digital phenotyping for mobility, created by Gemini

A truly personalized navigation agent would fuse three real-time data streams: external environmental context (crowd density, noise levels), a learned user profile (stable preferences such as avoiding tunnels), and real-time internal state (physiological stress signals from wearables, such as EDA and HRV). This allows the agent to dynamically reroute based on a user's rising stress levels. This approach reframes the navigation agent as a "digital phenotyping for mobility" tool, offering objective insights into environmental triggers for clinicians (for the purposes of this research sketch, ethical questions about data privacy and potential misuse are set aside). Future research should focus on co-design with neurodivergent individuals to ensure empowerment, not paternalism, and explore mechanisms for user feedback. Systematic sensory data collection and mapping of urban environments are needed to create "sensory maps" that inform the agent's routing decisions.
Rigorous studies are required to evaluate the real-world impact of these agents on anxiety reduction and independence, using both objective physiological measures and qualitative methods.

Therapeutic Applications in Immersive Environments

Sensory cocoon, created with Gemini

Public transportation often causes sensory overload and anxiety due to unpredictable stimuli. Research needs to explore creating a "sensory cocoon" using immersive technologies to mediate sensory input. The audio component would use AI-powered headphones for intelligent, selective sound filtering (dampening jarring noises, preserving announcements) while overlaying calming audio. The visual component, delivered via AR smart glasses, would subtly augment the real world by dynamically dimming cluttered environments or blurring faces to reduce cognitive load and social pressure, while also highlighting clear paths or displaying calming visual patterns. Unlike Virtual Reality Exposure Therapy (VRET), which exposes users to simulated anxiety triggers, the "immersive sanctuary" shields users from overwhelming real-world stimuli for in-the-moment coping. These approaches are complementary: VRET builds long-term resilience clinically, while the "sensory cocoon" is an assistive digital therapeutic for daily life, potentially bridging clinical treatment and independent functioning. Future research must quantify the physiological impact of sensory modulation on anxiety using laboratory and field studies. Critically, it must address the immersion-awareness trade-off, ensuring that therapeutic benefits do not compromise situational awareness and safety. Finally, research should explore personalization and user control, investigating interfaces ranging from fully automated to granular manual control to ensure effectiveness and empowerment for diverse users.
Navigating Social Anxiety

Navigating Social Anxiety, created with Gemini

Urban environments, with their high density and constant social interaction, can be debilitating for individuals with Social Anxiety Disorder (SAD). AR offers a powerful in-situ intervention, overcoming the generalization limitations of VR exposure therapy. An AR application delivered via smart glasses can act as a "social-cognitive prosthetic" and in-situ coach, reducing cognitive and emotional burden by highlighting less congested paths, displaying pre-scripted conversation starters, or even analyzing the facial expressions of conversation partners to provide private feedback on social cues. I am sure many other forms of assistance are needed that I cannot yet imagine. To discover them, broad focus group discussions are needed with domain experts and individuals with SAD.

Focused research is required to establish clinical efficacy through randomized controlled trials comparing AR interventions to gold-standard treatments. Usability and social acceptability are paramount, requiring research into discreet hardware design and subtle interfaces that do not draw unwanted attention. Finally, deep engagement with ethical implications is crucial, examining potential bias in emotion recognition algorithms and the risk of over-reliance on technology, aligning with principles of responsible, human-centered innovation.

These research areas collectively aim to transform our cities into more humane, responsive, and supportive ecosystems for individual well-being. This is now feasible due to the convergence of multimodal AI, on-device processing, and continuous sensory data from wearables. Ethical frameworks from AI industry leaders guide this potential, compelling researchers to prioritize usability, trust, equity, and the long-term psychological and social impacts.
The path forward demands human-subject research, co-design with vulnerable communities, rigorous evaluation, and a critical examination of personalizing reality, ultimately shifting walking, cycling, transit riding, and all other travel from a metric of efficiency to a measure of human well-being and happiness.

  • Use of LLM Arena for easy to medium level engineering problems

The swift and widespread integration of Large Language Models into virtually every professional sector has created an immediate and critical need for their inclusion within university curricula. We need to incorporate them yesterday. Universities have an obligation to guide students on the path of ethically navigating this new technology, teaching them to critically evaluate AI-generated content, understand its inherent biases, and leverage it as a powerful tool for augmenting human intellect rather than replacing it, and all of that, somehow, without sacrificing the main task: learning critical thinking and problem-solving skills in a specific area. When the most advanced LLMs are able to solve almost all engineering problems, how can we achieve that? Let's share ideas!

Idea: Use of LLM Arena for easy-to-medium-level engineering problems.

LLM Arena is a crowdsourced online platform where users anonymously vote for the better of two side-by-side large language model responses to the same prompt, generating a continuous leaderboard that ranks models based on human preference: https://lmarena.ai/

I teach transportation engineering to juniors, so my example comes from that course.

Problem from the textbook: A four-timing-stage traffic signal has critical lane group flow ratios of 0.225, 0.175, 0.200, and 0.150. If the lost time per timing stage is 5 seconds and a critical intersection v/c of 0.85 is desired, calculate the minimum cycle length and the timing stage effective green times such that the lane group v/c ratios are equalized.

Revised problem for individual homework: Using LLM Arena ( https://lmarena.ai/ ), solve this problem 2 times (you will get 4 different solutions) with random models.

“A four-timing-stage traffic signal has critical lane group flow ratios of 0.225, 0.175, 0.200 and 0.150.
If the lost time per timing stage is 5 seconds and a critical intersection v/c of 0.85 is desired, calculate the minimum cycle length and the timing stage effective green times such that the lane group v/c ratios are equalized.” Note: you may need to copy this problem into a text editor before pasting it into the arena text field.

For every 'battle', provide: screenshots of the obtained answers before and after your judgement (the first screenshot will show Assistant A/B instead of model names; the second will show model names); a report of which models participated and what your judgement was; and a justification for your judgement with at least one reason.

For every solution, assume that you are a teacher grading a student's work and provide: a solution screenshot with the name of the model; a grade on a 0-10 scale, where 0 means no solution provided and 10 means a perfect solution (make sure your grading is fair and consistent between 'students'); if your grade is less than 10, an explanation of all mistakes made by the 'student'; and feedback to the 'student'.

This is how the LLM Arena interface looks as of July 2025:

LLM Arena Interface (July 2025)

Why an easy-to-medium-level problem? As of July 2025, most of these problems CAN be solved by advanced LLMs without any additional materials. However, difficult problems that require merging multiple concepts may not yet be solvable with a zero-shot approach.

Why solve it at least twice? Some 'junior' models may not be able to solve even easy problems; see the picture below:

LLM results before user judgement (no names of the models). Right screen: no solution is provided; left screen: correct solution is provided.

You can ask students to run another battle if this happens, or ask them to solve the problem 3-4 times to get different outputs that include no solutions, wrong solutions, and correct solutions. In the picture below, the left solution is wrong, while the right solution is correct.
LLM Arena results after the user finished judgement (left screen is highlighted in green; names of the models are visible). Right screen: wrong solution is provided; left screen: correct solution is provided.

The problem considered here is an easy one that every student can usually solve correctly on the first attempt. If more advanced LLMs are used directly (Gemini, ChatGPT, etc.), students will most likely get the correct answer on the first attempt, limiting any incentive to look into the solution at all. With LLM Arena, however, we got one no-solution, one wrong solution, and two correct solutions. This allows for a more engaging activity for the student.

Why ask students to grade and provide feedback? The grading process, with its discussion of errors, forces students to look deeply into the solutions (however easy the problem is) multiple times, providing the necessary practice. Writing feedback trains the ability to communicate engineering problems. Another bonus: there is no need to generate different numbers to avoid 'cheating', as the LLMs provide enough randomness to guarantee that students will be working with somewhat different solutions. Sounds like a win-win-win, right?

Final thoughts: Is this feasible for large classes? The only issue is how to grade such homework for 150 students. We do need new tools in Brightspace. Before LLMs, this assignment was a self-graded calculation problem with randomly generated numbers and unified feedback with easy solution steps. However, if the problem is formulated in the way proposed here, grading becomes a time-consuming task that requires further discussion, especially in light of the intense debate about how teachers will be the first to be replaced by AI. What do you think?
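For instructors who want a quick reference answer while grading these 'battles', the calculation itself is short. Below is a minimal Python sketch using the standard minimum-cycle-length relation C = L·Xc / (Xc − ΣY) and equalized-v/c green times g_i = Y_i·C / Xc, the standard textbook formulas for this problem type. It is an instructor aid, not part of the student assignment:

```python
# Reference solution for the textbook signal-timing problem above.
# C = L * Xc / (Xc - sum(Y_i)) gives the minimum cycle length for a
# desired critical v/c; g_i = Y_i * C / Xc equalizes lane-group v/c ratios.

flow_ratios = [0.225, 0.175, 0.200, 0.150]  # critical lane group flow ratios Y_i
lost_time_per_stage = 5.0                   # seconds
n_stages = 4
Xc = 0.85                                   # desired critical intersection v/c

L = n_stages * lost_time_per_stage          # total lost time per cycle = 20 s
Y = sum(flow_ratios)                        # sum of flow ratios = 0.75
C_min = L * Xc / (Xc - Y)                   # 20 * 0.85 / 0.10 = 170 s

# Effective green times that equalize lane-group v/c ratios at Xc
greens = [y * C_min / Xc for y in flow_ratios]  # 45, 35, 40, 30 s

print(f"Minimum cycle length: {C_min:.0f} s")
for i, g in enumerate(greens, 1):
    print(f"Stage {i} effective green: {g:.0f} s")

# Sanity check: effective greens must sum to C - L
assert abs(sum(greens) - (C_min - L)) < 1e-9
```

Any LLM 'student' whose cycle length differs from 170 s, or whose greens do not sum to C − L, has made an error worth flagging in the feedback.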

  • If you embrace AI in your teaching…

As a professor in an engineering department, I feel that we may have an easier time adapting to the rise of generative AI than our colleagues in other areas. This is because we have faced similar paradigm shifts in previous generations, navigating disruptions that fundamentally changed how we learn and how we work.

Calculators came first... We saw it with the introduction of calculators, which were initially feared as a crutch that would atrophy our mental math skills.

Computers came second... Then came computers that could solve quadratic equations, and later complex differential equations, seemingly automating core parts of our curriculum.

Despite these advancements, engineering education survived and even became more demanding, pushing us to solve more complex and abstract problems in class. That is because these tools also changed how we work. So will generative AI. And we will survive it too! With that confidence, I have chosen to fully embrace it in my teaching, reshaping my syllabus not as a defense against AI, but as a roadmap for partnering with it. Below are the changes (additions) I made to the syllabi of all my courses to accommodate AI tools in my teaching. I have implemented three changes.

First, I have added a new student learning objective.

Student Learning Objectives: Students will learn and practice ethical and efficient use of Artificial Intelligence (Generative AI).

Second, I have added a clause explaining student use of AI tools.

Students' Use of Artificial Intelligence: In this course, you are encouraged to use generative AI tools (ChatGPT, Gemini, etc.) as partners in your learning process. Think of them as powerful assistants that can help you brainstorm, outline, find information, and refine your writing. However, they are tools, not replacements for your own critical thinking. You are the author and ultimately responsible for the submitted work. The goal is to use AI to elevate your work, not to generate it for you.
· Unless an assignment explicitly prohibits it, you may use AI tools for: (i) brainstorming and exploring topics, (ii) developing outlines for your papers and projects, (iii) checking your grammar and style, (iv) summarizing complex texts to aid your understanding, (v) explaining concepts in different ways.

· For any assignment where you use an AI tool, you must include a brief "AI Use Declaration" at the end of your submission. This declaration should be a short paragraph that transparently describes: which AI tool(s) you used (e.g., "I used Gemini..."), how you used it (e.g., "...to brainstorm initial ideas for my essay topic and to help create a first draft of my outline."), and a brief statement on how you reviewed and edited the output (e.g., "I then wrote the essay myself, fact-checked all information, and edited the entire document for originality and clarity."). This declaration is part of your assignment grade, and it aims to foster transparency and reflection on your learning process.

· If you include a direct, unmodified quote from an AI model (for example, to analyze its response), you must cite it clearly in-text and in your bibliography, just as you would any other source. Example citation: "Describe the symbolism of the donut's shape in the movie Everything Everywhere All at Once." Prompt to Gemini, Google, 6 July 2025.

· Presenting AI-generated text or ideas as your own original work without proper declaration or citation is considered plagiarism. You are responsible for the accuracy, originality, and critical analysis of everything you submit. The final work should reflect your unique voice and understanding of the material.

· Be aware that individual assignments may have specific rules regarding AI use that supersede this general policy. Always read assignment instructions carefully.
Failure to adhere to this policy will be considered a violation of the university's academic integrity code and may result in penalties, including failure of the assignment or the course.

Finally, I have disclosed my own use of AI to students using this clause.

Instructor's Use of Artificial Intelligence: In the spirit of transparency, and to model the ethical partnership with technology that this course encourages, I want to be open about my own use of artificial intelligence (AI) in preparing and administering this course. I may use AI tools as a productivity assistant to help with tasks such as:

· Generating initial ideas for lecture topics, discussion questions, and in-class activities.
· Creating first drafts of presentation slides, assignment prompts, and illustrative case studies.
· Checking documents for clarity, grammar, and style.
· Creating custom images or diagrams to visually explain complex concepts in my lectures.

It is critical for you to understand that AI serves as my assistant, not my replacement. Every piece of AI-generated content is thoroughly reviewed, fact-checked, edited, and approved by me to ensure it is accurate, relevant, represents my opinions, and meets the pedagogical goals of our course. The final judgment, the core intellectual content, and, most importantly, all grading and feedback on your work are and will always be my own.

In addition, here is a Google document with many examples provided by professors from diverse disciplines.

  • Using LLM for grading

In academia, using AI for grading and providing feedback seems like a reasonable next step. I suspect that AI will manage to grade, and provide valid feedback for, easy-to-medium-difficulty calculation problems. If it can solve it, it can grade it, right? But what if the assignment is not a calculation problem? We need to test it!

Note that the privacy of my students is a major concern for me (even with the turn-off button that instructs the LLM not to save information; as long as something is online, I have low trust that the data is not collected). For the purpose of this experiment, I replaced the names of all students.

What activity did I use for the LLM to grade? One of the activities my students do (juniors, enrollment 100-150, course 'Transportation Systems Analysis') is Weekly Group Activities (5 students per team), where students are asked to provide responses to 5 questions (some debate-style, some knowledge-based). Each team provides a one-page report graded on a 0-5 scale. In addition, teams are required to hold their discussions online to enable participation grading. Here is the section from the syllabus/assignment regarding participation grading:

*** Chat discussion, graded individually on a 0-3 scale, where: i. 0 is assigned to students who have not participated; ii. 0.5 per question is assigned to a student who provided a minor contribution (helped edit the question, provided a short comment, and/or expressed agreement/disagreement with others' ideas); iii. 1 per question is assigned to students who provided a major contribution (participated in debates for that question, provided justification for their agreement/disagreement with others' ideas); iv. Note: to get full contribution points, a student needs to provide major contributions for at least 2 questions and minor contributions for two other questions. ***

What do we want to learn? I was curious whether an LLM is capable of providing reasonable grading and feedback for the chat discussions.
This year my enrollment was 103, so each week I had 21 reports and 21 chats. Every week, the assigned TA is tasked with looking through all the chats to capture participation. It is time-consuming and somewhat subjective, although the TA did a great job, with no complaints throughout the semester. In this experiment, I wanted to see: whether an LLM would be able to grade consistently across chats/groups; how LLM grading compares with our grading; and how different models perform.

My observations

I will give an example of one team discussion graded by two different models (DeepSeek and the free Gemini model) later. Before that, let me share some observations from the experience:

Models more or less demonstrated good performance in the extreme cases: when students did not participate, or barely did.

Different models often gave similar or identical grades, but their grades differed from the one given by the TA. This was so common across groups that I started checking myself whether the TA's grades were fair. My investigations supported the TA's decisions.

DeepSeek demonstrated initiative and often graded student participation as 3.5-4.5 out of 3 with the comment 'Exceeding Expectations'. That was funny and surprising! The grading policy and my prompt instructions stated clearly that the grade should be between 0 and 3. However, because there are directions on how to grade the contribution to each question, when a student participated more than required, DeepSeek violated the 0-3 scale. The other models I tried followed the 0-3 constraint and stopped at 3 even if the student contributed to more questions. It was interesting to see how models prioritized different conditions when they contradicted each other.
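To make the cap issue concrete, here is a minimal sketch of the participation rubric with the 0-3 cap applied explicitly. This is a hypothetical helper for illustration, not our actual grading workflow; the cap in the last line is exactly the constraint DeepSeek ignored when a student contributed to more questions than required:

```python
# Hypothetical helper illustrating the participation rubric above.
# Per-question points: 0.5 for a minor contribution, 1.0 for a major one;
# the total is capped at the 0-3 scale.

def participation_grade(contributions, cap=3.0):
    """contributions: per-question labels, e.g. ['major', 'minor', 'none', ...]."""
    points = {"none": 0.0, "minor": 0.5, "major": 1.0}
    raw = sum(points[c] for c in contributions)
    return min(raw, cap)  # without this cap, 5 major contributions would score 5.0

# Major contributions to all five questions: capped at 3.0, not 5.0
print(participation_grade(["major"] * 5))                               # 3.0
# One major and one minor contribution (cf. Model 1's grade for Student P)
print(participation_grade(["major", "minor", "none", "none", "none"]))  # 1.5
```

Dropping the `min(..., cap)` reproduces DeepSeek's 'Exceeding Expectations' behavior, which suggests the models resolved the contradiction between the per-question directions and the 0-3 scale differently.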
Results of my investigation into the differences between the LLM and TA assessments:

When students had little discussion on some question, just 1-2 messages but to the point and at the level of a major contribution, we gave them full credit; the models, however, awarded a minor contribution (0.5) and, on rare occasions, even no contribution (0 points). The opposite was also true: when a student had a lot of discussion with somewhat relevant text but no substance (discussing wording and formatting for the report, asking others for clarification, confirming that others had solved the question), we gave no contribution (0) or a minor contribution (0.5), but the models awarded more, qualifying that discussion as a major contribution. This means the models could not 'grasp' the meaning of the discussion; they just counted the number of relevant-looking statements. So, as of now, TAs are the only valid graders for this type of assignment. The models are very close to being able to do it, but not quite yet.

Example

Here is an example of the analysis from 2 different models. For this team we gave 3 out of 3 to every student; however, the models disagreed with us substantially, demonstrating some of my observations above (I provide the detailed report for only one student to keep this post readable).

Model 1

Student Analysis and Grading: Student P: A.1: Did not contribute. A.2: Provided major contribution by selecting and justifying vehicle attributes. A.3: Provided minor contribution by acknowledging AI's use in spellcheck and understanding. A.4: Did not contribute. A.5: Did not contribute. Summary: Paul contributed to the discussion, mainly focusing on Q2, and provided some general feedback and edits to the final document. Grade: 1.5 (1 for A.2, 0.5 for A.3) Grade Summary: In summary, students Y, A, T, and E all received a 3, and P received a 1.5.
Model 2

Here's the summary and grading of each student's contributions based on the provided chat transcript and the evaluation metric. I've analyzed their participation in the discussion for each question (A.1 to A.5) and assigned scores accordingly. Student P A.1: No contribution. A.2: Provided a major contribution (1 point) by agreeing with A's suggestion for modifying vehicle attributes (tau) and proposing emissionClass and sigma as additional attributes. A.3: No contribution. A.4: No contribution. A.5: No contribution. Total Score: 1/3 Summary: P contributed moderately to A.2 but did not address A.1, A.3, A.4, or A.5. His participation was limited and needs improvement. *** Final Grades A: 4.5/3 (Exceeds expectations) Y: 3.5/3 (Exceeds expectations) T: 3/3 (Meets expectations) E: 2.5/3 (Meets expectations but could improve) P: 1/3 (Needs improvement)

You can guess which model was DeepSeek.

Concluding Notes

Note that Model 1 captured some contribution from Student P for A.3 but labeled it minor; Model 2 did not catch it at all. We judged both contributions as major. Both models missed this student's minor contributions, where they just clarified something in one or two sentences.

This experiment and report were created for a Lumen Circle activity during the Spring 2025 semester. By the time you read this, the experience will not be replicable due to the rapid advancement of LLMs. Hopefully, it will be much better! ...and safer. If you want to discuss more, feel free to reach out or to join a Lumen Circle. I do recommend it!

  • Re-engineering the Research Paper for the AI Era

The academic engineering paper, a cornerstone of scientific discourse, is in the midst of a profound, albeit quiet, identity crisis. Its primary author is increasingly an AI, and its most diligent reader is also an AI. Large Language Models (LLMs) now assist in drafting everything from literature reviews to methodology sections, while other algorithms parse, summarize, and categorize these papers for databases and discovery tools. In this new ecosystem, the traditional paper format, designed for human contemplation, has become an inefficient bottleneck. We are, in essence, forcing AIs to write verbose prose for other AIs to then painstakingly deconstruct. The time has come to redesign the research paper to be what it is already becoming: a machine-readable, data-first document.

Should you enjoy the writing? (Created with Gemini)

With that, the current model seems extremely wasteful, especially when the environmental costs of training and using LLMs come to mind. It encourages "algorithmic bloat": long-winded introductions, meandering literature reviews, and detailed descriptions of standard procedures that serve neither human nor machine. A human researcher skims for the core contribution, while an AI reader must sift through semantic filler to extract the same key information. This existing model has already, if inadvertently, created a culture of "citation racing," where publications resemble a self-referential game of citation counts rather than a genuine pursuit of knowledge. Researchers often cite their own previous work or tangentially related papers, inflating bibliography length without adding substantive context. This practice, combined with the publication of "minimal publishable units," often designed to train students or secure grants, clutters the academic record with articles that offer little novel insight.
Furthermore, the traditional prose-heavy structure, coupled with a lack of accessible data and the inability to readily verify cited sources (often due to paywalls), makes it exceedingly difficult for human reviewers to detect logical fallacies or unsubstantiated claims. Here and now, we have a choice: either continue following these inefficient procedures to report research progress, or rethink the process, for example by looking to the conciseness of medical research, which prioritizes clarity, efficiency, and, as a result, machine-parsability.

A. The Introduction is the Contribution. The lengthy background and historical context should be eliminated unless they are foundational to the paper's novel claim. The paper should begin immediately with a clear, structured statement of its contributions. The "why" should be self-evident from the problem being solved.

S. Context via Structure, Not Prose. The sprawling literature review is obsolete. It should be replaced by a structured table of prior work and existing gaps. This table can list key preceding works, their methods, their limitations, and precisely how the current paper differs from or improves upon them. This is more direct for a human reader and immediately parsable for an AI. We all create such tables while doing the research, but when we start writing the paper, we revert to prose.

H. Methodology as a Pointer. Describing a well-established process in detail is redundant. If the methodology is not a core contribution, it should be reduced to method names and citations. For example: Methodology: We employed the standard Adam algorithm (Kingma and Ba, 2014) with standard decay rates (β1 = 0.9, β2 = 0.999) and a learning rate of 1×10⁻⁴.

I. Data and Code as First-Class Citizens. To ensure true transparency and reproducibility, the paper must treat its data and code as core components, not as optional supplements.
All datasets, models, and analysis scripts must be made available in persistent, publicly accessible repositories (e.g., Zenodo, Figshare, GitHub) and linked directly within the paper. This is a non-negotiable component: a claim without accessible data is an unsubstantiated assertion. This allows any reader, human or AI, to immediately access and interrogate the evidence, transforming the paper from a static report into a dynamic, verifiable research object.

F. Explicit, Parsable Contribution Sections. The paper's core innovations must be explicitly declared in a dedicated, machine-readable section using simple tags or a structured data block (e.g., JSON-LD): We introduce a new benchmark dataset of 5,000 annotated synthetic images of brain scans. We propose a novel behavioral mechanism, "Cognitive Restructuring," that reduces driving anxiety by 36%.

T. Short, Data-Driven Results and Conclusions. The results section should be a direct presentation of data: figures, tables, and key metrics. The conclusion should be a bulleted list summarizing the findings and their implications, mirroring the claims made in the contribution section.

The Role of the Human Reviewer in the AI Era

This shift logically raises a critical question: how can such a condensed, hyper-specialized paper be reviewed? The answer is not to eliminate human oversight, but to empower it. Human review remains the most critical defense against fraud, ethical abuse, and the injection of pseudoscience into the scientific record.

Citation or not citation? (Created with Gemini)

In this new model, the reviewer's role becomes that of an expert arbiter of integrity and logic, facilitated by AI. Upon receiving a paper, the reviewer gains access to a suite of tools.
One tool, using the paper’s structured citations, generates a traditional, narrative-style introduction and literature review on demand, providing the necessary context for reviewers who are not directly involved in the area explored by the authors. Crucially, with mandatory data and code linking, the reviewer can also deploy automated analytical and replication tools. These tools can run directly on the provided dataset to independently replicate key figures, perform standardized statistical checks, and flag anomalies or inconsistencies in the data. This empowers the reviewer to move beyond trusting the author’s presentation to actively verifying the findings.

Freed from parsing prose and empowered with analytical tools, the human reviewer can focus on higher-order tasks:

Verifying Integrity - Do the results presented logically follow from the methodology? Do the findings from the replication tools match the author's claims?
Assessing Soundness - Is there any evidence of data manipulation, p-hacking, or other forms of academic dishonesty flagged by the analysis?
Evaluating Significance - Does the contribution, now clearly isolated and verified, represent a meaningful advance over the prior art?
Ethical Oversight - Are there any foreseeable ethical implications or potential misuses of the technology that need to be addressed?

This "human-in-the-loop" model preserves rigorous, critical oversight while adapting to the realities of an AI-driven research landscape. Importantly, the described approach is not new. This is how research papers are structured and reviewed in some disciplines, and how they were envisioned to be. With the rapid proliferation of AI in our lives, we have a chance to fix the current bugs and level up the process altogether. Will we? The first question we must ask ourselves is: Is the current, tradition-based system of writing corrupted to the degree that it impedes new discovery?
The second question is: Is AI ruining academic writing in its traditional form? Have you answered YES to both questions...

  • Empowering the Reviewer: Essential AI Tools for the New Academic Landscape

The peer review process is already a significant, often uncompensated, service to the scientific community, demanding deep expertise and considerable time, which makes performing it both quickly and accurately a formidable challenge. Surprisingly, the proliferation of AI in academic writing doesn't make reviewing easier, even as it rapidly erodes the writing itself. Will academic writing be reborn? We will see! Let's assume it will. The possible shift towards concise, data-driven research papers will necessitate a parallel evolution in the reviewer's toolkit. To effectively assess the integrity and validity of AI-influenced manuscripts, reviewers will increasingly rely on AI-powered tools themselves. Here are three crucial categories of such tools:

1. Contextualization and Literature Synthesis Assistants

Given the possible absence of lengthy introductions and literature reviews in the future paper format, reviewers will benefit immensely from AI tools that can rapidly generate context. These tools would take the paper's structured citation table and, on demand, synthesize a concise yet comprehensive overview of the relevant prior work and the research gap the paper aims to fill. Functionality would include:

Analyzing the provided citations to identify key themes, influential papers, and the evolutionary trajectory of the research area.
Highlighting how the cited works relate to each other and to the current paper's stated contributions.
Explicitly outlining the novel aspects of the submitted work in relation to the existing body of knowledge.

This allows reviewers, even those not deeply embedded in the paper's specific niche, to quickly grasp the significance and novelty of the research without relying solely on the authors' potentially biased framing, and to efficiently check whether each cited work actually belongs in the paper (the latter is almost impossible to do at present).

2.
Data Integrity and Reproducibility Verification Platforms

With the mandatory inclusion of data and code links, AI-powered platforms will be essential for reviewers to independently verify the presented findings. These tools would offer functionalities such as:

Running the provided scripts on the linked datasets to reproduce the key results (figures, tables, metrics) presented in the paper.
Identifying potential issues in the data or analysis, such as unusual distributions, outliers, or signs of p-hacking.
Analyzing the code for adherence to best practices and identifying potential flaws in the experimental design or implementation.

By automating these checks, reviewers can dedicate their expertise to interpreting the results and assessing their broader implications, rather than spending time on manual verification.

3. Contribution Validation and Claim Consistency Analyzers

With the emphasis on clearly defined contribution sections, AI tools can assist reviewers in evaluating the validity and consistency of these claims throughout the paper. Functionality would include:

Identifying all explicit and implicit claims made in the contribution, results, and conclusion sections.
Ensuring that each claim is directly and adequately supported by the presented data and analysis.
Identifying any contradictions or logical fallacies within the paper's arguments and between its different sections.

These tools will help reviewers ensure that the concise paper delivers on its promises and that the stated contributions are genuinely substantiated by the research presented. Embracing these AI-powered tools will be crucial for maintaining rigor and trust in the evolving landscape of academic publishing. The question is: What did I miss?
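One thing worth making concrete: the core of a claim-consistency analyzer (category 3) is a comparison between the metrics an author claims and the values a replication run actually produces. The following is a minimal Python sketch of that comparison step, assuming metrics arrive as flat dictionaries; the function name, field names, and 5% tolerance are all illustrative, not any real tool's API.

```python
def check_claims(claimed: dict, recomputed: dict, rel_tol: float = 0.05) -> list:
    """Compare metrics claimed in a paper's contribution block against
    values recomputed from the linked data; return human-readable flags."""
    flags = []
    for metric, claimed_value in claimed.items():
        if metric not in recomputed:
            # A claim with no reproducible counterpart is itself a red flag.
            flags.append(f"{metric}: not reproducible (no recomputed value)")
            continue
        actual = recomputed[metric]
        # Relative deviation between the claimed and recomputed value.
        deviation = abs(actual - claimed_value) / max(abs(claimed_value), 1e-12)
        if deviation > rel_tol:
            flags.append(
                f"{metric}: claimed {claimed_value}, recomputed {actual} "
                f"({deviation:.0%} deviation)"
            )
    return flags

# Example: the paper claims a 36% anxiety reduction, but re-running the
# linked analysis script reproduces only 29%; the dataset size checks out.
claims = {"anxiety_reduction_pct": 36.0, "dataset_size": 5000}
reproduced = {"anxiety_reduction_pct": 29.0, "dataset_size": 5000}
for flag in check_claims(claims, reproduced):
    print(flag)
```

A real platform would obtain `recomputed` by actually executing the linked scripts on the linked data; the comparison and flagging step would look much like this.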

  • Reflections on My Internship Experience by Gongda Yu

Choosing to undertake an internship during summer 2024 as a Ph.D. student was a decision that filled those three summer months with unexpected joy and growth. Stepping into a role that I had never experienced before, surrounded by a team of like-minded, passionate individuals, I found myself on a journey of self-discovery. This experience has not only helped me uncover what I truly love to do but also revealed my strengths in ways I hadn't anticipated. My manager and Ph.D. advisor provided invaluable guidance throughout this journey, shaping my path and encouraging me to explore my potential.

This internship opened my eyes to the gap between the academic knowledge gained in the classroom and the practical challenges faced in the real world. Working on actual projects, I had the chance to interact directly with clients — an experience that taught me the art of communication. I learned how to express our team’s needs and capabilities with clarity and tact and to respond to clients' questions thoughtfully. This process pushed me to grow, not just as an engineer, but as a communicator and a professional who understands the importance of empathy and patience in every interaction.

One of the most profound experiences was visiting project sites. Standing there, witnessing DOT projects and construction in action, I could feel the tangible impact of our work. The concepts I once studied in textbooks were now a reality before my eyes. It was a deeply meaningful experience to see theory transformed into practice, and it reminded me why I chose this field in the first place.

As my internship drew to a close, writing my farewell emails to everyone I had worked with was unexpectedly emotional. Each goodbye felt like a moment of reflection at a crossroads in life—a realization that our paths might not cross again soon, if ever. It was a poignant moment, realizing that sometimes you have to silently wish people well and hope they have a good life.
This feeling of closure, mixed with uncertainty about the future, was both humbling and profound.

Throughout the internship, I took the opportunity to schedule numerous coffee chats with colleagues from various departments. I encourage everyone to do the same, either during an internship or during the first month at any job. These conversations were incredibly enriching. Each person shared their stories, insights, and career advice, which became precious nuggets of wisdom for me. I came to understand that what I thought I was meant to do, based on my academic background, wasn't necessarily what would make me happiest or most fulfilled. The true lesson I learned is that our work should resonate with our passions, something we’re eager to pour our time and energy into — not just a task we endure until the end of the day.

For me, life has always been about embracing new experiences and taking on different challenges. This belief has led me to try various jobs and live in different countries, and I feel this is where the true essence of life lies. This internship has reaffirmed that belief, showing me that life is about exploring, learning, and discovering what truly makes us come alive.

In these three months, I have experienced so much more than I anticipated. I have grown professionally and personally, gaining a deeper understanding of who I am and what I want to pursue. This experience has taught me that the right path is not always the one that seems most obvious or logical, but the one that feels most true to who we are. As I move forward, I carry these lessons with me, grateful for every moment of this incredible journey.

Best,
Gongda
