Editors' Choice
November 30, 2018, 11:19

[From the Sandbox] Musical box and rotary encoder on FPGA board

Introduction We are first-year students studying Computer Science at Innopolis University, and we would like to share our experience of writing a Verilog program to create the coolest (well, at least the loudest) rotary encoder ever on an FPGA board. In this article, you will find a story about our project, the hardware and software we used, and some background theory on rotary encoders and generating sounds with the FPGA board's buzzer. Finally, we will provide a link to a GitHub repository where a reader can access the source code. We hope you will like the project and that it will inspire you to make something similar. So, let's start! Hardware and Software Read more →
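The teaser itself contains no code, but the decoding idea behind a quadrature rotary encoder is easy to sketch. The snippet below is a hypothetical software illustration in Python (the project itself is written in Verilog for the FPGA): it maps valid transitions of the two-bit Gray code from the encoder's A/B channels to +1/-1 position steps and treats anything else as bounce or noise.

```python
# Hypothetical software sketch of quadrature decoding for a rotary encoder.
# The real project is written in Verilog for an FPGA board; this only
# illustrates the state-transition idea the background theory refers to.

# Valid transitions of the 2-bit Gray code (previous A,B -> new A,B):
# +1 for a quarter-step clockwise, -1 for counter-clockwise.
TRANSITIONS = {
    (0b00, 0b01): +1, (0b01, 0b11): +1, (0b11, 0b10): +1, (0b10, 0b00): +1,
    (0b00, 0b10): -1, (0b10, 0b11): -1, (0b11, 0b01): -1, (0b01, 0b00): -1,
}

def decode(samples):
    """Turn a sequence of (A, B) samples into a running position."""
    position = 0
    prev = samples[0][0] << 1 | samples[0][1]
    for a, b in samples[1:]:
        curr = a << 1 | b
        position += TRANSITIONS.get((prev, curr), 0)  # ignore glitches/repeats
        prev = curr
    return position

# One full clockwise detent: 00 -> 01 -> 11 -> 10 -> 00
print(decode([(0, 0), (0, 1), (1, 1), (1, 0), (0, 0)]))  # prints 4 quarter-steps
```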

November 29, 2018, 20:40

ANTISOCIAL MEDIA: Fixing Facebook: Zuckerberg falls short of his New Year’s goal. Over the year, …

ANTISOCIAL MEDIA: Fixing Facebook: Zuckerberg falls short of his New Year’s goal. Over the year, Zuckerberg published a series of notes outlining what Facebook’s done to combat election meddling, as well as hate speech, misinformation and other offensive content. The social network pulled down more than 1.5 billion fake accounts, launched a database of political […]

November 28, 2018, 14:00

Why We Need to Audit Algorithms

Algorithmic decision-making and artificial intelligence (AI) hold enormous potential and are likely to be economic blockbusters, but we worry that the hype has led many people to overlook the serious problems of introducing algorithms into business and society. Indeed, we see many succumbing to what Microsoft’s Kate Crawford calls “data fundamentalism” — the notion that massive datasets are repositories that yield reliable and objective truths, if only we can extract them using machine learning tools. A more nuanced view is needed. It is by now abundantly clear that, left unchecked, AI algorithms embedded in digital and social technologies can encode societal biases, accelerate the spread of rumors and disinformation, amplify echo chambers of public opinion, hijack our attention, and even impair our mental wellbeing.

Ensuring that societal values are reflected in algorithms and AI technologies will require no less creativity, hard work, and innovation than developing the AI technologies themselves. We have a proposal for a good place to start: auditing. Companies have long been required to issue audited financial statements for the benefit of financial markets and other stakeholders. That’s because — like algorithms — companies’ internal operations appear as “black boxes” to those on the outside. This gives managers an informational advantage over the investing public, which could be abused by unethical actors. Requiring managers to report periodically on their operations provides a check on that advantage. To bolster the trustworthiness of these reports, independent auditors are hired to provide reasonable assurance that the reports coming from the “black box” are free of material misstatement. Should we not subject societally impactful “black box” algorithms to comparable scrutiny?

Indeed, some forward-thinking regulators are beginning to explore this possibility. For example, the EU’s General Data Protection Regulation (GDPR) requires that organizations be able to explain their algorithmic decisions. The city of New York recently assembled a task force to study possible biases in algorithmic decision systems. It is reasonable to anticipate that emerging regulations might be met with market pull for services involving algorithmic accountability.

So what might an algorithm auditing discipline look like? First, it should adopt a holistic perspective. Computer science and machine learning methods will be necessary, but likely not sufficient, foundations for an algorithm auditing discipline. Strategic thinking, contextually informed professional judgment, communication, and the scientific method are also required. As a result, algorithm auditing must be interdisciplinary in order to succeed. It should integrate professional skepticism with social science methodology and concepts from such fields as psychology, behavioral economics, human-centered design, and ethics. A social scientist asks not only, “How do I optimally model and use the patterns in this data?” but also, “Is this sample of data suitably representative of the underlying reality?” An ethicist might go further and ask a question such as: “Is the distribution based on today’s reality the appropriate one to use?” Suppose, for example, that today’s distribution of successful upper-level employees in an organization is disproportionately male. Naively training a hiring algorithm on data representing this population might exacerbate, rather than ameliorate, the problem.
An auditor should ask other questions, too: Is the algorithm suitably transparent to end-users? Is it likely to be used in a socially acceptable way? Might it produce undesirable psychological effects or inadvertently exploit natural human frailties? Is the algorithm being used for a deceptive purpose? Is there evidence of internal bias or incompetence in its design? Is it adequately reporting how it arrives at its recommendations and indicating its level of confidence?

Even if thoughtfully performed, algorithm auditing will still raise difficult questions that only society — through its elected representatives and regulators — can answer. For instance, take the example of ProPublica’s investigation into an algorithm used to decide whether a person charged with a crime should be released from jail prior to their trial. The ProPublica journalists found that Black defendants who did not go on to reoffend were assigned medium or high risk scores more often than white defendants who did not go on to reoffend. Intuitively, the different false positive rates suggest a clear-cut case of algorithmic racial bias. But it turned out that the algorithm actually did satisfy another important conception of “fairness”: a high score means approximately the same probability of reoffending, regardless of race. Subsequent academic research established that it is generally impossible to simultaneously satisfy both fairness criteria.

As this episode illustrates, journalists and activists play a vital role in informing academics, citizens, and policymakers as they systematically investigate such tradeoffs and evaluate what “fairness” means in specific scenarios. But algorithm auditing should be kept distinct from these (essential) activities. Indeed, the auditor’s task should be the more routine one of ensuring that AI systems conform to the conventions deliberated and established at the societal and governmental level. For this reason, algorithm auditing should ultimately become the purview of a learned (data science) profession with proper credentialing, standards of practice, disciplinary procedures, ties to academia, continuing education, and training in ethics, regulation, and professionalism. Economically independent bodies could be formed to deliberate and issue standards of design, reporting, and conduct.

Such a scientifically grounded and ethically informed approach to algorithm auditing is an important part of the broader challenge of establishing reliable systems of AI governance, auditing, risk management, and control. As AI moves from research environments to real-world decision environments, it goes from being a computer science challenge to becoming a business and societal challenge as well. Decades ago, adopting systems of governance and auditing helped ensure that businesses broadly reflected societal values. Let’s try to replicate this success for AI.
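To make the fairness tradeoff concrete, here is a small numeric sketch with made-up numbers (my own illustration, not ProPublica's data). Both groups below receive a high-risk label that is equally well calibrated, meaning the same probability of reoffending given the label, yet, because their base rates of reoffending differ, the false positive rates among non-reoffenders come out very different.

```python
# Hypothetical toy numbers (not ProPublica's data) illustrating why a risk
# score can be "fair" in the calibration sense yet produce different false
# positive rates when the base rate of reoffending differs between groups.

def rates(true_pos, false_pos, negatives):
    """Precision of the high-risk label and false positive rate."""
    ppv = true_pos / (true_pos + false_pos)   # P(reoffend | flagged high risk)
    fpr = false_pos / negatives               # share of non-reoffenders flagged
    return ppv, fpr

# Group A: 1000 people, 50% reoffend; 300 reoffenders and 200 of the 500
#          non-reoffenders are flagged high risk.
# Group B: 1000 people, 20% reoffend; 120 reoffenders and 80 of the 800
#          non-reoffenders are flagged high risk.
groups = {"A": (300, 200, 500), "B": (120, 80, 800)}
for name, (tp, fp, neg) in groups.items():
    ppv, fpr = rates(tp, fp, neg)
    print(f"group {name}: P(reoffend | high risk) = {ppv:.2f}, FPR = {fpr:.2f}")

# Both groups get the same 0.60 precision (the calibration notion of fairness),
# but the false positive rates are 0.40 vs 0.10 -- the disparity ProPublica
# highlighted.  With unequal base rates, the two criteria cannot both hold.
```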

March 28, 2018, 20:33

[Translation] Alan Kay and Marvin Minsky: Computer Science already has a "grammar". What it needs is a "literature"

First from the left is Marvin Minsky, second from the left is Alan Kay, followed by John Perry Barlow and Gloria Minsky. Question: How would you interpret Marvin Minsky's idea that "Computer Science already has a grammar. What it needs is a literature"? Alan Kay: The most interesting aspect of Ken's blog post (including the comments) is that nowhere in it can you find any historical mention of this idea. In fact, more than 50 years ago, back in the sixties, there was a lot of discussion on this subject and, as far as I remember, several papers. I first heard the idea from Bob Barton, in 1967 in graduate school, when he told me that it had been part of Donald Knuth's motivation for writing "The Art of Computer Programming", chapters of which were already being passed around. One of Bob's main concerns at the time was "programming languages intended to be read by people as well as by machines." That was also a major motivation for parts of the design of COBOL in the early sixties. And, perhaps more importantly for our topic, the idea is visible in the very early and rather beautifully designed interactive language JOSS (mostly by Cliff Shaw). As Frank Smith observed, literature begins with ideas worth discussing and writing down; it often partly generates representations and extends existing languages and forms; this leads to new ideas about reading and writing; and, finally, to new ideas that were not part of the original motive. Read more →

Editors' Choice
March 26, 2018, 21:52

[Translation] Alan Kay: How I would teach Computer Science 101

"One of the reasons to actually go to university is to go beyond mere vocational training and instead latch on to deeper ideas." Let's think about this for a moment. A few years ago, Computer Science departments invited me to lecture at a number of universities. Almost by accident, I asked my first audience, made up of senior undergraduates, graduate students, and professors, for their definition of "Computer Science". Everyone could give only an engineering definition. I did this at each new place, and the results were similar everywhere. Another question was: "Who is Douglas Engelbart?". A few people said: "wasn't he somehow connected with the computer mouse?" (and that disappointed me greatly, since my research community had put a lot of effort into making it possible to answer that question with two or three mouse clicks and to confirm that Engelbart really was somehow connected with the computer mouse). Part of the problem was a lack of curiosity, part of it the narrowness of personal goals that had nothing to do with learning, part of it the absence of any notion of what this science actually is, and so on. I have been working part-time in the computing department of the University of California for several years (essentially I am a professor, but I don't have to attend department meetings). Periodically I teach classes, sometimes to first-year students. Over these years, the already low level of curiosity about Computer Science has dropped noticeably (while its popularity has grown, since computing is seen as a path to a well-paid job if you can program and hold a certificate from a top-10 school). Accordingly, not a single student has yet complained that the first language at the University of California is C++! Read more →

March 26, 2018, 12:05

Is the Confidence Gap Between Men and Women a Myth?

How working women are kept from positions of influence and power is by now well documented by scientists and journalists alike. While the research has not been specifically remedy-directed, where gender-based bias has been discovered some have sought to counter it with HR policy changes, training, awareness campaigns, equal opportunity legislation, and more. No small part of these countermeasures has been directed at women themselves. One especially pernicious message has gone unchallenged for years: that female workers lack the self-confidence of their male peers and that this hurts their chances at success. If they were less hesitant and sold themselves better, this logic goes, success would be theirs.

Popular business writers thus advise women to “visualize success,” “take the stage,” “rewrite the rules,” and “think differently.” Famously, in 2013, Sheryl Sandberg, the chief operating officer of Facebook and a billionaire, published a book in which her advice for working women was to “lean in.” The following year, in The Confidence Code: The Science and Art of Self-Assurance – What Women Should Know, authors Katty Kay and Claire Shipman write, “Underqualified and underprepared men don’t think twice about leaning in. Overqualified and overprepared, too many women still hold back. And the confidence gap is an additional lens through which to consider why it is women don’t lean in.” Underlying all these messages is the belief that although the deck may be stacked against women as a group, individual women can break through the glass ceiling if they make certain choices: forego the trappings of femininity, learn the rules of the male-dominated working world, and assert themselves accordingly. In this framing, the aforementioned structural barriers are hurdles to be leaped over with proper mental training.

Yet, perhaps challenging common wisdom, recent research shows no evidence of a female modesty effect. In achievement-oriented domains, women rate themselves no lower than their male counterparts on leadership-related dimensions. Moreover, studies are finding no consistent gender differences in self-reported self-confidence. That is, women in today’s organizations seem to see themselves as just as capable as men of succeeding in their professional roles.

In research with Margarita Mayo of IE Business School and Natalia Karelaia of INSEAD, I took a different angle on the confidence factor and its relationship to organizational influence. Regardless of how confident a woman feels, we focused on what we termed self-confidence appearance — that is, the extent to which others perceive a woman as self-confident. Using multisource, time-lag data from a male-dominated technology company employing more than 4,000 people worldwide, we sought to determine how much the appearance of self-confidence increased the extent to which an employee gained influence within the company. We found that even among similarly high-performing workers, appearing self-confident did not translate into influence equally for men and women. For women, but not for men, influence was closely tied to perceptions of warmth — how caring and prosocial they seemed. Moreover, women’s self-reported confidence did not correlate with how confident these women appeared to others. While self-confidence is gender-neutral, the consequences of appearing self-confident are not. The “performance plus confidence equals power and influence” formula is gendered.
Successful women cannot “lean in” on a structure that cannot support their weight without their opportunities (and the myth) collapsing around them. Popular messaging about how women must change to appear more self-confident as a key to their success isn’t just false. It also reflects how the burden of managing a gender-diverse workplace is placed on the female employees themselves. Where their male colleagues are allowed to focus on their own objectives, women who are expected to care for others are shouldering an unfair load. This prosocial (double) standard does not appear in any job description, but it is, indeed, the key performance indicator against which access, power, and influence will be granted to successful women. Men are held to a lower standard.

The takeaway is not, then, that women should forego developing the skills that build their confidence and bolster their performance. Instead, it is that organizational systems and practices should change so that women are rewarded equally. Companies can adopt processes, rules, and safety checks that ensure that all employees are being evaluated according to the same criteria. There are several actions that organizations can take for this purpose:

Make job requirements for success explicit. In particular, the HR department could systematically document the broad portfolio of skills (beyond technical expertise) required for success and disseminate such a list among all employees. If employee warmth is desired — as might be healthy for a collaborative organizational culture — it should be made an explicit benchmark in employee development, hiring, and evaluation for men as well as women. Moreover, in sectors where women are deeply underrepresented, the explicit expectation of prosocial skills can be helpful both in attracting female candidates and in signaling their priority to existing personnel.

Monitor promotions and career advancement. The fallacy of meritocracy in male-dominated organizations is well documented. Paying attention to implicit gender biases in promotion decisions is an important first step for organizations to develop more inclusive cultures. For example, often without making it explicit, performance appraisals contain nearly twice as much language about being nice and warm for women as for men. Our results imply that while self-confident men might pass the bar more easily, self-confident women will not unless they also show a higher level of warmth. HR can monitor promotion decisions accordingly, so that talented women who do not conform to gender role prescriptions do not end up being penalized.

Highlight a wider array of role models. The stereotype of employees in male-dominated professions, such as computer science and engineering, is very narrow and relies on a common assumption: that successful employees in these fields have poor interpersonal skills, and that this is perfectly fine because only their technical ability matters. Successful high-tech celebrities are therefore, perhaps not surprisingly, often portrayed in the popular press as male, bright, dedicated, and lacking interpersonal skills. Organizations can try to combat this by elevating other role models with a more varied range of skills and talents, who can inspire a more diverse workforce.
Although no single study can provide a definitive understanding of gender biases at work, our results highlight the importance for organizations of monitoring how high-performing men and women are perceived — by their peers and especially by their supervisors — and how they progress in their careers. Only when organizations make active efforts to uncover gender biases and the processes that perpetuate them will they be able to come closer to the kinds of workplaces we believe in — where our talents and skills are rewarded fairly, regardless of gender.

March 26, 2018, 06:00

Extra: Carol Bartz Full Interview

Stephen Dubner's conversation with the former C.E.O. of Yahoo, recorded for the Freakonomics Radio series “The Secret Life of a C.E.O.” The post Extra: Carol Bartz Full Interview appeared first on Freakonomics.

March 23, 2018, 21:05

This Stanford Computer Science Genius Aims To Crack The Code Of Learning And Leadership

It's not that a white male can't lead. I've done it. It's that we all benefit by being exposed to a diverse cohort of people, working in a diverse community. Because if you're in a leadership position, you're not leading just people who look like you.

Editors' Choice
March 22, 2018, 09:47

[From the Sandbox] The Ping-Pong algorithm, or a critique of Reverse Polish Notation

This article was written out of indignation that in our universities students are taught even the simple parsing of mathematical expressions on the basis of Reverse Polish Notation (RPN), which is a blatant perversion of normal human logic. The description of RPN used here is taken from R. Lafore, Data Structures and Algorithms in Java, Classics of Computer Science, 2nd ed., St. Petersburg: Piter, 2013, 704 pp., recommended as the most popular and adequate source on this question, as well as on other commonly used algorithms. In other words, we are comparing different algorithms with different ideologies. Read more →
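For readers unfamiliar with the approach being criticized, here is a minimal Python sketch (my own illustration, not the article's Ping-Pong algorithm) of the textbook RPN pipeline: convert infix tokens to postfix order with the shunting-yard algorithm, then evaluate the postfix sequence with a second stack.

```python
# Minimal textbook-style RPN pipeline (an illustration of the approach the
# article criticizes, not of its proposed Ping-Pong algorithm).
import operator

OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul, "/": operator.truediv}
PRECEDENCE = {"+": 1, "-": 1, "*": 2, "/": 2}

def to_rpn(tokens):
    """Shunting-yard: convert infix tokens to postfix (RPN) order."""
    output, ops = [], []
    for tok in tokens:
        if tok in OPS:
            while ops and ops[-1] != "(" and PRECEDENCE[ops[-1]] >= PRECEDENCE[tok]:
                output.append(ops.pop())
            ops.append(tok)
        elif tok == "(":
            ops.append(tok)
        elif tok == ")":
            while ops[-1] != "(":
                output.append(ops.pop())
            ops.pop()  # discard the "("
        else:
            output.append(tok)  # operand
    return output + ops[::-1]

def eval_rpn(tokens):
    """Evaluate a postfix expression with a single operand stack."""
    stack = []
    for tok in tokens:
        if tok in OPS:
            b, a = stack.pop(), stack.pop()
            stack.append(OPS[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()

expr = "3 + 4 * ( 2 - 1 )".split()
print(to_rpn(expr))            # ['3', '4', '2', '1', '-', '*', '+']
print(eval_rpn(to_rpn(expr)))  # 7.0
```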

March 21, 2018, 20:30

"It Wasn't Russia," It Was Obama-Based Social Media Mining That Beat Hillary

Authored by Trent Lapinski via Hackernoon.com. Just over a year ago I wrote “Did Donald Trump Use Artificial Intelligence To Win The Election?”, an article about Cambridge Analytica. If you haven’t been paying attention lately, Facebook just banned Cambridge Analytica from its platform, including the whistleblower who exposed how Cambridge Analytica was potentially misusing data. Meanwhile, the mainstream media has gone into a frenzy, crashing Facebook’s stock, forcing executives out of the company, and calling for social media reform. This story is seriously a year old, and although I was not the first journalist to write about this, I did bring it to the attention of about 6,300+ readers.

While it is new that we have a whistleblower, and the number “50 million accounts,” what the media isn’t telling you is how Cambridge Analytica actually pulled this off. The fact of the matter is that the data Cambridge Analytica acquired from Facebook, much of which they obtained legally and as Facebook intended them to, was entirely useless without machine learning. Machine learning is one of the baby steps required for building artificial intelligence: it is a field of computer science that gives computer systems the ability to “learn” from data without being programmed by a person. It was this technology that Cambridge Analytica used to analyze tens of millions of user profiles with data acquired from Facebook and put together psychological profiles of those users. They then used Facebook’s targeted advertising system to show those users ads and content geared toward their own psychological profiles. Think of it as persuasive arguing on steroids, hyper-targeted by your own beliefs about the world.

For example, I let Cambridge Analytica analyze my data and they pegged me as a liberal. I know this may be a shocking revelation to all of my haters who believe I’m a conservative Russian agent conspiracy theorist who sells tinfoil hats for a living, but according to my Facebook data I’m a liberal. They also determined I’m more intelligent than 96% of Facebook users, and I somehow lost intelligence points because I like the Beatles (mostly a John Lennon fan), but that is a subject for an entirely different article.

What this means is that Cambridge Analytica knows I am probably not going to vote for Trump based on my psychological profile. However, that is actually more valuable to them than not. There is little point in spending advertising dollars on people you know will vote for your candidate; the person you actually want to target is the one who will not vote for your candidate, whom you instead try to convince not to vote at all. This is what I believe Cambridge Analytica did: I believe they targeted liberals in swing states and promoted anti-Hillary content in an attempt to convince people not to vote. It meant they had to target fewer people, spend less money, and ultimately accomplish their goal of winning the election. All Cambridge Analytica had to do was figure out who was a liberal and convince them not to show up on election day. By doing so they could ignore everyone who couldn’t vote for whatever reason, and only spend a small portion of their budget on making sure Trump supporters showed up to vote. A no-vote was actually more powerful in swing states, where, guess what? Liberal voter turnout was lower for Hillary than it was for Obama. Granted, I still think Hillary blew it and didn’t campaign properly.
I am not excusing her ineptitude; she was a horrible candidate and I did not vote for her (yes, I voted for the female doctor instead, and no, I don’t regret it, let’s just not talk about it, okay?). Perhaps this is just a giant coincidence, but I don’t think it is. Given Donald Trump’s ad spend, I’m fairly certain this is exactly what happened. The reason I believe this is that if I were hired to help win an election and had the tools and data Cambridge Analytica had, it is exactly what I would do. I don’t have anything to prove my theory, unfortunately, but given what is being reported I am fairly certain I am on to something, and I first came up with this theory a year ago, before the media went into freak-out mode.

This is why psychological profiling, privacy, machine learning, and artificial intelligence are so frightening when put in the wrong hands. It isn’t the machines we need to fear, it is the people who are in control of those machines and our data that we need to worry about. Machines are just tools, but people are fickle and, with the right incentive, easily corrupted.

Facebook, Not Russia, Helped Trump Win The Election

We know from Wikileaks that John Podesta was in touch with Sheryl Sandberg, COO of Facebook, and even thanked her for helping them “elect the first woman President of the United States.” However, Hillary didn’t win, despite the fact that Facebook was trying to get her elected, alongside Google, which was censoring search results and possibly running its own bots, and Twitter, which was censoring the truth and purposely hiding Wikileaks tweets that are 100% factual (as it later admitted to Congress). The reason Hillary Clinton did not win, despite the media and social media companies doing everything they could to rig the election in her favor, is that Facebook double-dipped and allowed Cambridge Analytica to use its surveying tools to collect user data on tens of millions of users. This data was then used to target tens of millions of users with political advertising through Facebook’s ad platform, based on psychological profiles built from data they bought or acquired from Facebook.

Facebook is basically responsible for feeding the analytics system that enabled Cambridge Analytica and the Trump campaign to be so targeted and effective with a minimal budget. They ultimately won Donald Trump the swing states and the election, subverted democracy, and likely made Facebook a bunch of money. That’s what happened, that’s how Trump won. It wasn’t the Russians, it was our own social media companies who sold our data to the Trump campaign, which then likely used it to convince liberals not to vote in swing states. It’s both horrifying and cleverly brilliant at the same time. The funny thing is, Obama did something similar in 2012 and liberals celebrated. Not so funny when the other team takes your trick and executes it more effectively, now is it?

*  *  *

If you like this article feel free to send some Bitcoin.
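The mechanism the author describes (infer a trait from profile data, then choose whom to target) can be sketched in a few lines. The toy example below uses entirely fabricated data and features, not anything Cambridge Analytica is known to have used: it trains a logistic regression on binary "page like" indicators from a small surveyed seed group, then scores everyone else and keeps the users whose predicted probability crosses a threshold.

```python
# Toy sketch of like-based profiling and ad targeting (fabricated data;
# illustrative only -- not Cambridge Analytica's actual model or features).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Each row is a user, each column a binary "liked this page" indicator.
n_users, n_pages = 1000, 20
likes = rng.integers(0, 2, size=(n_users, n_pages))

# Pretend a survey gave us trait labels for a small seed group of users.
true_weights = rng.normal(size=n_pages)
labels = (likes @ true_weights + rng.normal(scale=0.5, size=n_users)) > 0
seed = slice(0, 200)  # the surveyed users

model = LogisticRegression(max_iter=1000).fit(likes[seed], labels[seed])

# Score the unsurveyed users and pick those most likely to hold the trait.
scores = model.predict_proba(likes[200:])[:, 1]
targets = np.where(scores > 0.8)[0] + 200
print(f"{len(targets)} of {n_users - 200} unsurveyed users selected for targeting")
```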

Editors' Choice
March 20, 2018, 20:32

Susceptibility of brain atrophy to TRIB3 in Alzheimer’s disease, evidence from functional prioritization in imaging genetics [Computer Sciences]

The joint modeling of brain imaging information and genetic data is a promising research avenue to highlight the functional role of genes in determining the pathophysiological mechanisms of Alzheimer’s disease (AD). However, since genome-wide association (GWA) studies are essentially limited to the exploration of statistical correlations between genetic variants and...

March 12, 2018, 06:00

Extra: Satya Nadella Full Interview

Stephen Dubner's conversation with the C.E.O. of Microsoft, recorded for the Freakonomics Radio series “The Secret Life of a C.E.O.” The post Extra: Satya Nadella Full Interview appeared first on Freakonomics.

Editors' Choice
March 9, 2018, 11:01

FaceTune is conquering Instagram – but does it take airbrushing too far?

The app that makes Photoshop-style retouching easy is wildly popular with celebrities but has prompted a body image debate. The Oxford English Dictionary chose “selfie” as its word of the year at the end of 2013. At around the same time, four Israeli computer science PhD students and a supreme court clerk had an idea for an app that would allow regular people to do Photoshop-style retouching of their smartphone photos. That app was FaceTune. Continue reading...

March 8, 2018, 18:13

DXC Technology (DXC) Prices Senior Notes Offering Due 2025

DXC looks to repay part of its existing credit facility with the proceeds of its new senior notes offering worth GBP 250 million due 2025.

March 8, 2018, 07:00

Here’s Why All Your Projects Are Always Late — and What to Do About It

Whether it’s a giant infrastructure plan or a humble kitchen renovation, it’ll inevitably take way too long and cost way too much. That’s because you suffer from “the planning fallacy.” (You also have an “optimism bias” and a bad case of overconfidence.) But don’t worry: we’ve got the solution. The post Here’s Why All Your Projects Are Always Late — and What to Do About It appeared first on Freakonomics.

Editors' Choice
March 5, 2018, 13:36

On the neurocomputers of the late USSR

The title turned out rather clickbaity, of course, and I apologize for it right away. Today I simply want to share an entertaining booklet that was published by a computer sciences institute of the USSR Academy of Sciences in (presumably) 1989. Read more →

Editors' Choice
March 4, 2018, 15:30

Bangladeshi police detain three more men after attack on prominent writer

DHAKA (Reuters) - Bangladeshi police arrested three men on Sunday as they investigate a knife attack on Zafar Iqbal, a popular author and professor of computer science and engineering.

March 3, 2018, 23:37

American scientists have created a robot for autonomous furniture making

Engineers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a robot for cutting furniture blanks.

March 2, 2018, 22:18

College Adds 'Ne', 'Ve', & 'Ey' As Gender-Neutral Pronouns

Authored by Toni Airaksinen via CampusReform.org. The Kennesaw State University LGBT Resource Center recently produced a new pamphlet that adds “ne,” “ve,” “ey,” “ze,” and “xe” to the list of gender-neutral pronouns. The “Gender Neutral Pronouns” pamphlet, a copy of which was obtained by Campus Reform, tells students that “some people don’t feel like traditional gender pronouns fit their gender identities,” and thus lists alternatives that students can use instead. These pronouns are accompanied by a conjugation chart listing how they might be used as a subject, object, possessive, possessive pronoun, and reflexive. For example, to refer to a student who identifies as “ne,” one could say “Ne laughed” or “That is nirs.” To refer to a student who identifies as “ve,” the pamphlet explains that one would say “Vis eyes gleam” or “I called ver.”

The pamphlet - which lists seven different types of gender-neutral pronouns - encourages students to ask their friends, classmates, and coworkers how they identify before making any assumptions. The guide does warn, however, that students “may change their pronouns without changing their name, appearance, or gender identity,” and suggests that preferred pronouns be re-confirmed regularly during “check-ins at meetings or in class.” “It can be tough to remember pronouns at first,” the guide notes. “Correct pronoun use is an easy step toward showing respect for people of every gender.”

The guide was first discovered by Francis Hayes, a freshman studying computer science at Kennesaw State, who told Campus Reform that the pamphlet was distributed Tuesday by administrators in the school’s Student Center. “It is a disgrace, because I thought that my school was one of the few schools left that weren't teaching these things. But when I found this, I felt really disappointed,” he told Campus Reform. “Why is this university entertaining something as useless as this?” Hayes also criticized the pamphlet for potentially confusing impressionable students, claiming that Kennesaw State is “in the early stages of Cultural Marxification.” “The guide will confuse students regarding what gender they are,” he speculated, adding that “none of those pronouns exist in the English language, so it's pretty much ridiculous that they're trying to teach this.”

Kennesaw State media officer Tammy Demel acknowledged a request for comment from Campus Reform, but did not respond in time for publication.

March 2, 2018, 22:03

5 things Trump did this week while you weren't looking

Outside of the commotion in Washington, Trump's cabinet leaders are reshaping federal policy.

April 25, 2014, 07:36

Trends in the US IT sector

Everyone has gotten completely carried away with this Ukraine story. Given the degree of intensity, one can assume that newsmakers are artificially whipping up hysteria to divert attention from more global trends, such as the breakup of the eurozone, the failure of the "Japanese miracle" and Abe's policies, the protracted recession in the US, and another round of disappointing corporate reports. Incidentally, lately people talk about anything except the latest results of the world's largest corporations. So what is going on with them? Of the 30 largest IT companies in the US, 11 are posting lower annual revenue compared to 2013. These include HPQ, IBM, Intel, Western Digital, Computer Sciences, Seagate Technology, Texas Instruments and others. The largest annual revenue decline is at Seagate Technology, almost 15%. Judging by the five-year trends, Hewlett-Packard, IBM, Computer Sciences and Texas Instruments are in the worst position, with revenue at five-year lows. The table shows the data as a trailing four-quarter sum. But there are also those pulling ahead: Microsoft, Google, Ingram Micro, Qualcomm. Apple's growth is slowing and it is entering a phase of stagnation, to be followed by a crushing collapse amid growing competition. Intel has been stagnating for about three years. The first-quarter data are preliminary, since far from everyone has reported yet, but the general trends can be discerned. Roughly 35-40% of large companies are reducing business activity, 25-35% are stagnating, and about as many are growing. I note that growth is concentrated in areas related in one way or another to mobile devices: either software production, advertising on them, or hardware supply, as with Qualcomm. On profits: here things are even worse. Few companies are showing gains in efficiency. About 60% of companies are seeing declining profits or stagnating. A relatively stable trend of profit growth is seen at Google, Oracle, and Qualcomm, although their growth rates are the lowest in three years.